What OpenAI’s Contract Agent Gets Right, and What It Misses

See the benefits and risks of OpenAI's Contract Agent, highlighting its efficiency and the crucial need for human oversight.
OpenAI’s Contract Data Agent cuts contract review times by roughly 50%. Teams now handle more than 1,000 contracts each month with minimal added headcount. It works in three stages: ingestion (converting every file format into structured data), inference (extracting key terms and flagging potential issues), and review (human experts confirm what the AI found). This hybrid approach makes the process faster and smoother.

But heavy reliance on AI carries risks. Over-trusting automated output, skill atrophy among human reviewers, data-security concerns, and potential legal exposure are serious problems. Leaders need to balance AI and human review, maintain thorough records, and establish strong governance to mitigate these risks.

Key Takeaways:

  • Benefits: Faster reviews, reduced workload, and cost savings.
  • Risks: Over-reliance on AI, missed errors, and compliance gaps.
  • Safeguards: Regular human review, thorough audit logs, and strong data-security measures.

OpenAI’s tool shows that AI can simplify complex tasks, but proper safeguards are essential to keep it safe and compliant.

Turning contracts into searchable data at OpenAI

What OpenAI Built

OpenAI’s Contract Data Agent follows a straightforward three-stage pipeline that turns messy, mixed-format contract files into clean, searchable data.

Stage 1: Ingest The system accepts many file types – PDFs, scans, even photographs – and pulls them into a single processing flow. Whatever the original format, the data emerges normalized and ready for deeper analysis and expert review.

Stage 2: Inference This is where the real work happens. Using carefully designed prompts, the tool extracts only the essential elements of a contract and skips the rest. It evaluates the terms, flags unusual language, and effectively serves as a first-pass review.

Stage 3: Review In the final step, finance professionals examine the structured output. The system highlights unusual or rare terms and suggests areas for further scrutiny. The structured data is then stored in tabular form, making it simple to integrate into existing finance workflows. OpenAI says this process cuts contract review times by 50%, with reviews now completed “overnight” instead of taking hours. As Wei An Lee, an AI Engineer at OpenAI, put it, “in less than six months, the team went from reviewing hundreds of contracts each month to more than a thousand”.
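
OpenAI hasn’t published the agent’s implementation, but the three-stage flow above can be sketched in a few lines. This is a minimal, hypothetical illustration – the class, function names, and the toy “net 90” rule are assumptions, not OpenAI’s actual code.

```python
from dataclasses import dataclass, field

@dataclass
class ContractRecord:
    """Hypothetical container for one contract as it moves through the pipeline."""
    source_file: str
    text: str = ""
    extracted_terms: dict = field(default_factory=dict)
    flags: list = field(default_factory=list)
    reviewed: bool = False

def ingest(path: str) -> ContractRecord:
    """Stage 1: normalize any input format into plain text.
    A real system would OCR scans and parse PDFs; here we just read a text file."""
    with open(path, encoding="utf-8") as f:
        return ContractRecord(source_file=path, text=f.read())

def inference(record: ContractRecord) -> ContractRecord:
    """Stage 2: extract key terms and flag unusual language.
    Stand-in rule: treat 'net 90' payment terms as non-standard."""
    if "net 90" in record.text.lower():
        record.extracted_terms["payment_terms"] = "net 90"
        record.flags.append("non-standard payment terms")
    return record

def review(record: ContractRecord) -> ContractRecord:
    """Stage 3: finance staff confirm the structured output before storage.
    In the real workflow a human checks each flag; here we simply mark it done."""
    record.reviewed = True
    return record
```

In practice the inference stage would call a language model with the prompts described above; the point of the sketch is the hand-off structure, where every record carries its flags forward into human review.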

It’s worth noting that OpenAI describes the Contract Data Agent as an internal experiment, built specifically to meet its own growth needs. That positions it as a proving ground for new approaches within the company.

The Benefit Is Real

The project doesn’t just showcase technical skill – it delivers real, measurable gains for finance and procurement teams. The OpenAI Contract Data Agent can cut contract review time in half and lets operations scale without additional headcount[1]. For CFOs facing growing contract volumes and tight budgets, that means faster turnaround and lower operating costs.

Hidden Risks Leaders Must Manage

While AI-driven contract review speeds up work, leaders should understand several risks before adopting the technology at scale. Below, we outline these risks and the steps needed to manage them.

Getting Too Comfortable with Automation

A major concern is over-reliance on AI. In contract review, this can lead legal teams to accept AI output without careful scrutiny. The risk? Problematic clauses slip through unnoticed. Maintaining a healthy balance of AI and human review is essential for catching issues.

Skills Getting Rusty

Heavy reliance on AI can dull human reviewers’ instincts. Over time, legal teams may struggle to spot complex issues, especially when AI output is wrong or incomplete. Regular training and routine participation in reviews keep teams alert and skilled.

Bias and Fairness

AI is only as good as the data it was trained on. It excels at recognizing familiar patterns but can miss novel or unusual contract terms. Leaders must ensure these rare or new terms receive careful human review, especially when they fall outside what the AI has seen.

To defend decisions under legal or regulatory scrutiny, firms need strong internal policies and clear records. A well-maintained audit trail – capturing AI output, human review steps, and final decisions – is vital for transparency and accountability.

Keeping Data Safe and Private

Contract data often contains sensitive information, such as employee details, deal terms, or financial figures. Even for internal use, AI systems must follow strict rules on data handling, retain data only as long as necessary, and keep it confidential.

The Responsibility Rests with the User

Firms can’t blame errors on AI. Regulators make clear that companies remain accountable for AI-driven decisions. Strong risk management, thorough logging, and auditing are essential to meet these obligations.

These risks show why strong governance is crucial, which we cover next. By facing these problems head-on, leaders can capture the benefits of AI while keeping their firm safe.

A Simple Guide to Good Governance

Here’s a checklist for deploying contract AI with strong controls. It addresses the risks above through clear, measurable checks.

Human Review Tiers Set tiers so that higher-risk clauses – such as liability, indemnification, and termination – always get human review. Routine items like payment terms and renewals can pass automatically when the model’s confidence is high enough. This aligns with guidance on human oversight.
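
Tier-based routing is straightforward to express in code. A minimal sketch follows – the clause categories and the 0.95 confidence threshold are illustrative assumptions, not values from any vendor’s system:

```python
# Clause types that must always go to a human, regardless of model confidence.
HIGH_RISK = {"liability", "indemnification", "termination"}

# Assumed threshold: routine clauses below this confidence still get a human look.
AUTO_APPROVE_CONFIDENCE = 0.95

def route(clause_type: str, model_confidence: float) -> str:
    """Decide who handles a clause: high-risk types are never auto-approved."""
    if clause_type in HIGH_RISK:
        return "human_review"
    if model_confidence >= AUTO_APPROVE_CONFIDENCE:
        return "auto_approve"
    return "human_review"
```

The design choice worth noting: risk category is checked before confidence, so even a 99%-confident extraction of a liability clause still lands on a human desk.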

Measuring Performance Evaluate the AI using precision and recall. Test the system against a held-out set of contract examples to measure how accurate and reliable it is.
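
Precision and recall can be computed by comparing the clauses the AI flagged against a hand-labeled gold set. A small sketch, assuming both sides are represented as sets of clause identifiers:

```python
def precision_recall(predicted: set, actual: set) -> tuple:
    """Compare AI-flagged clauses against a hand-labeled gold set.
    Precision: what fraction of flags were correct.
    Recall: what fraction of true issues were caught."""
    true_pos = len(predicted & actual)
    precision = true_pos / len(predicted) if predicted else 0.0
    recall = true_pos / len(actual) if actual else 0.0
    return precision, recall
```

Recall is usually the number to watch in contract review: a missed high-risk clause (low recall) is costlier than an extra flag a human dismisses (lower precision).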

Handling Exceptions Create automatic alerts for unusual clauses and route them to senior reviewers. Use a double-check process to ensure close scrutiny. Record every escalation to preserve the trail and support consistent review.

Data Rules Catalog all sensitive and critical information in contracts. Follow policies that limit data use to stated purposes, collect only what’s needed, set retention periods, and restrict access through role-based controls and encryption.

Audit Records Keep complete logs that capture model versions, prompts, data sources, reviewer identity, and decision timestamps. This depth of record-keeping is essential for legal and audit requirements.
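
One way to make such records concrete is to emit each decision as a structured log entry. The sketch below mirrors the fields named above; the field names and example values are illustrative assumptions:

```python
import json
from datetime import datetime, timezone

def audit_entry(model_version: str, prompt_id: str, data_source: str,
                reviewer: str, decision: str) -> str:
    """Build one append-only audit log line as JSON, covering the fields
    an auditor would ask for: which model, which prompt, which document,
    who signed off, what they decided, and when."""
    return json.dumps({
        "model_version": model_version,
        "prompt_id": prompt_id,
        "data_source": data_source,
        "reviewer": reviewer,
        "decision": decision,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }, sort_keys=True)
```

Serializing to JSON with sorted keys keeps entries diff-friendly and easy to load into whatever log store or SIEM the organization already runs.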

Continuous Monitoring Regularly review outcomes and track escalation rates. Use what you learn to refine the rules over time. Experience shows teams can handle far more contracts each month with only a small increase in staff.

Generic, horizontal tools may help, but truly accurate, controlled results require vertical AI Workers built to follow defined rules.

Consider what OpenAI did: they built a focused extraction tool. But at enterprise scale, the picture changes. Companies want AI Workers that are not only purpose-built for the task but also ship with governance controls from the start.

Enter ThoughtFocus Build, which creates AI Workers tailored to each finance task. For example:

  • The Procurement Worker not only flags unusual terms; it routes them for review and keeps logs for GDPR compliance.
  • The Revenue Recognition Worker maps contract terms to ASC 606 requirements while maintaining a detailed audit trail.
  • The Vendor Risk Worker enforces strict role-based access controls to keep contract risk low.

By combining elements like review tiers, exception paths, and detailed logs, these solutions do more than reduce risk – they turn operations into measurable financial value. Unlike basic tools that merely speed up review, these vertical AI Workers are built with governance at their core, meeting the standards CFOs, general counsels, and procurement leaders demand during audits and regulatory reviews.

In short, they combine compliance and intelligent automation to deliver results that large organizations can rely on.

Disclaimer: The views and opinions expressed in this blog post are those of the author and do not necessarily reflect the official policy or position of ThoughtFocus. This content is provided for informational purposes only and should not be considered professional advice.
