Discussion about this post

Paul D'Arcy

This is so smart.

I just keep reading and rereading it.

So smart.

pat kinsel

I think my experience at Notarize is relevant here, not because it involves AI but because it offloads a core task (a mortgage closing) from someone. We "sell the work" by offloading the task (albeit to other people, not to AI) and allow our customers to repurpose employee time toward other accretive tasks or to alter their staffing models. Much of our ROI story is rooted in that outcome.

The primary challenge, though, is getting customers to adopt above the threshold required to change either their internal operations or their staffing models. The beginning of a customer's adoption curve is the worst: they're forced to run bifurcated processes (actually costing more), AND the very people you aim to offset are often critical to managing that transition. At Notarize, I've cribbed some thinking from the medical and hospital industries, which hold that a 30% adoption rate is required to see the ROI of real process change; beyond that point, they can adjust the "standard of care" and make the new, better process the new normal. Getting to 30% is really hard, and we've obsessed over doing just that: convincing our customers to make us the standard of care for mortgage closings, auto sales, you name it.

I think it will be especially hard for AI in some of the industries you outline above, particularly legal/compliance/etc. Why? Everyone says LLMs drift, but that will surely be solved. Many of the things you describe are considered the practice of law, and people will need to adjudicate UPL (unauthorized practice of law) claims. Fun!!

I think the real issue is regulatory headwinds. Specifically, regulators are terrified of algorithms/machine learning/AI systems instituting global systemic bias. Take a property appraisal, which everyone agrees should be digitized. Regulators would rather have local consumers connect with local appraisers to disperse and decentralize the bias: an African American consumer might get an African American appraiser and "win" on one transaction, while another might get a racist white appraiser and "lose" on another. To regulators, that is better than one nebulous system making judgments they cannot assess. And the government has no ability, mandate, or even agency to test these models. So how is a large bank that is constantly sued for unfair lending practices supposed to adopt these systems? If anything, recent advancements show that any and all of these issues can be solved... but for AI to advance as you outline, it needs to think much more deeply about bias, and about how to instill the confidence required to take over from humans, who are obviously slow but easier to regulate and more random in their outcomes.
