Gen AI: Do You Want to Write or Edit?

NEWSLETTER VOLUME 2.15 | April 12, 2024

Editor's Note

The idea of telling a computer to write the complicated thing you don't want to write is enticing. The results don't seem to be much worse than all the other corporate speak filling up our email and other channels. What could possibly go wrong?

A bunch of stuff. And the article below does a great job explaining some of the big problems.

I want to talk about writing and editing. Writing well takes time. It's also intimidating. Your words are out in the world and on the record where you will probably either be judged or ignored. Neither is fun.

The temptation to delegate it to a machine is strong. But machines don't understand anything. That's because they don't actually think and are not really intelligent. They're just artificial.

When we ask machines to do our writing, they look at the sea of words they have and the order of those words in relation to other words. The sentences that come out are the product of pattern detection: predictions of which words tend to follow other words, based on how those words have been used before. None of it has anything to do with accuracy, truth, or usefulness. It may not even make sense. And to avoid producing the same thing over and over, the systems add randomness to their word choices, which is part of why they just make stuff up.
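
If you want to see the basic idea in action, here is a toy sketch in Python, using a made-up three-sentence "sea of words." Real systems are vastly bigger and more sophisticated, but the principle is the same: count which words tend to follow which, then build a sentence one predicted word at a time, with some randomness mixed in.

    import random
    from collections import defaultdict

    # A made-up, tiny "sea of words" standing in for the training text.
    corpus = "the report is due friday . the report is late . the memo is due monday ."

    # Count which word follows which (a simple bigram model).
    following = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        following[current].append(nxt)

    # Generate text by repeatedly predicting the next word from those counts.
    # The random choice keeps the output varied; it can just as easily
    # produce "the memo is late", which appears nowhere in the text above.
    word = "the"
    sentence = [word]
    while word != "." and len(sentence) < 10:
        word = random.choice(following[word])
        sentence.append(word)
    print(" ".join(sentence))

Run it a few times and you will eventually get "the memo is late," a sentence that appears nowhere in the original text. That is the shuffling and making stuff up, in miniature.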

Gen AI also only does what you ask it to do. So, if you don't understand what you want or how to write a clear, logical, and complete query, Gen AI won't help.

Even if you get something passable, you still need to read it carefully and edit it so that it makes sense and is accurate. If you don't know the material well enough to do this, Gen AI won't help.

Your choice is: do you want to write, or do you want to edit? Either way, you still need to know and understand the material, which is the hard part.

- Heather Bussing

Artificial intelligence (AI) is becoming increasingly prevalent in workplaces, providing new opportunities as well as new challenges for employers and employees. While AI has the potential to improve efficiency and productivity, its use also raises important questions about privacy, discrimination, and job displacement. Employers who choose to implement AI should consider including a provision in their employee handbook, or a separate policy, specifically addressing its use. Such a provision or policy can help mitigate risks, provide clarity for employees, and demonstrate an employer's commitment to using AI ethically and responsibly.

Employers who incorporate AI into the workplace should develop policies governing the appropriate use of generative AI, regularly update those policies as laws and technology continue to change, and enforce them. Employers should consider the following provisions in their AI use policies:

Specify Which Employees May Use AI and Require Prior Approval

For any number of reasons, employers may be willing to let some teams or groups, but not others, use generative AI technology, especially while they are still examining how AI can be incorporated into their company or industry. An AI policy should specify which departments, if any, are permitted to use AI.

Determine Which Tasks May Be Performed Using AI

Similarly, employers should define which tasks may be performed using AI. For example, an employer may approve its human resources team's use of AI for screening initial applicants (which presents its own host of issues, including bias), but prohibit the team from using AI to develop employment contracts or craft termination letters.

Make Employees Responsible for Outcomes

To ensure accountability, every AI use policy should explain that employees, as human beings, are ultimately responsible for the final product or output created or inspired by AI. This means employees should fact-check output, including (as appropriate) confirming that bias has not been introduced.

Prohibit Submission of Trade Secrets and Other Confidential Information

One of the biggest risks associated with generative AI is the possible loss of patent or trade secret rights, or breach of nondisclosure agreements with other entities, through the submission of sensitive or confidential information. For example, under U.S. patent law, the public disclosure of inventive information may invalidate potential patent rights. See 35 U.S.C. § 102. Submitting sensitive information to generative AI, without the proper protective measures, may also be considered a public disclosure that waives protections for trade secrets or other confidential information. Further, information submitted to an AI tool may be used in unintended ways, such as to train the AI model. For these reasons, companies should clearly define "confidential information" and/or "trade secrets," and prohibit the submission of such sensitive data to AI tools.

Consider Requiring Use Logs and Other Reporting

Employers can promote transparency and accountability by encouraging or requiring clear documentation of when and how AI tools are used by employees. Reporting or logging requirements can be flexible and tailored to each business. Consider when, to whom, and how often an employee should document their AI use, including whether it should include input, output, or both.

Oversight Is Essential

In this same vein, designate an individual or department in your business to oversee the use of AI. Employees should direct all inquiries about AI use, and make any necessary reports, to this individual or department. This individual or department should also be tasked with updating the company's AI policy and staying abreast of relevant legal or regulatory requirements.

Train Employees on Permissible AI Use and Enforce Your Policy

Of course, a written policy is only as good as the training and enforcement behind it. Regular training, especially in this evolving area, will be crucial to help employees understand the limits of permissible AI use while still promoting creativity and efficiency. Consistent, non-discriminatory enforcement of the policy will demonstrate the company's commitment to ethical and transparent AI use.

The foregoing suggestions are only some of what should be considered when developing a workplace AI use policy. Employers should also gather input from relevant stakeholders within the organization and seek legal counsel (either internally or externally) when designing, implementing, and enforcing a policy.
