
The Generation Game: Points to consider when leveraging generative AI in business

Many of us will by now have played around with ChatGPT, DALL-E and other generative AI products. We will have seen how powerful and impressive these tools are. We may also be wondering where all this is ultimately heading, what it means for our businesses, and how fast things are changing. While the answers to the first two questions are open to speculation, the third one is easier. Things are moving fast.

What is also clear is that tools of this nature are great servants, but dangerous masters. While it may be tempting to apply these technologies 'plug and play' within a business, if experimentation with generative AI moves beyond the test bench and into the real world of developing products and services for sale, things get real very quickly. Organisations could be storing up significant problems if they don't handle it in the right way.

The issue is simple to state – as generative AI becomes embedded into the toolset of everyday work, organisations must take a moment to think what they are ingesting and outputting. The ‘winners’ will be businesses which implement the structures that ensure they can harness the benefits of AI without exposing themselves to undue liability and risk.

Where businesses fail to do this, there is a danger that generative AI will be used within the organisation as a short-cut to produce plausible but unreliable outputs, which are then released into the wider world with little further thought.

In concrete terms, organisations should be mindful of the following key risks:

  • Reputational damage – using generative AI that creates biased or low-quality outcomes for customers risks serious damage to an organisation's reputation.
  • Project delays – the use of generative AI without proper monitoring gives rise to a meaningful risk that projects which use it may need to be scrapped or re-done, with all the cost and delay that entails.
  • Loss of valuable corporate information – colleagues inputting text into these tools in breach of normal rules regarding use of corporate information could cause the loss of valuable corporate data.
  • Reliance risk – using untried and unverified technology could cause, at best, embarrassment and, at worst, liability for decisions based on errors. At least in their current form, many of these generative AIs are capable of (convincingly) presenting inaccurate information as if it were fact. Any output therefore needs to be carefully fact-checked and reviewed.
  • Intellectual property infringement – generative AI can create new content, code, text and images in an instant, but how do you know whether this infringes third-party intellectual property rights?
  • Regulatory breaches – there is an emerging body of worldwide regulation with which all uses of generative AI will need to comply. This covers what are acceptable and unacceptable use cases, what information needs to be provided to customers when AI is used, and even registration requirements. Any use outside the terms of these regulations runs the risk of landing an organisation with regulatory fines or compensation payable to customers who have been impacted.
  • Security and data – generative AI products ingest sensitive and personal data from within organisations as well as from outside sources. The risk of data loss or misuse is significant if this use is not properly understood and managed. In addition, any sharing of content containing personal data with such generative AI systems is likely to run afoul of privacy laws.
  • Contractual breaches – everything from confidentiality clauses to subcontracting requirements could be breached if an organisation seeks to use cloud-based generative AI systems as part of their delivery model to customers.
  • Availability and service levels – while 'professional' versions of some of these tools are being released with availability commitments and service levels, it would be dangerous to try to build a business function around the continued availability of free tools which could easily be offline at critical periods.
  • Employee relations issues – if employees see AI being used, this may raise questions about their roles and job security.

With these risks in mind, what then should be the focus for organisations in the first part of 2023 when considering the use of generative AI? In our view, it all comes down to good governance.

  • Ensuring that any new product or service using generative AI is subject to proper scoping and risk/benefit assessment, to ensure it can be used in a way which is compliant and to understand what safeguards need to be in place.
  • Taking some time to understand how the system works, its capabilities and its limitations. How was it trained? Is the training data set up to date? What biases have been (potentially inadvertently) encoded within the AI by that training data set?
  • Testing the product or service before launch, and continuously throughout its lifecycle, to ensure it operates as intended and within the law.
  • Ensuring that there is human oversight of the output of generative AI before it is embedded in a product or service.
  • Putting in place safeguards against the product or service being used in a way that was not intended when initially launched.
  • Ensuring that all regulatory requirements are met, including, where necessary, record-keeping, audit trails and product registration.

This doesn’t mean that organisations should be slowing down their exploration and use of generative AI. What should happen, in parallel with that exploration, is that organisations ensure they have the right checks and balances in place to use it safely.

If you would like to learn more about these issues, and the steps you should take, please visit our website.
