Understanding the evolving value of AI in engineering

Retool Team

Jul 20, 2023

These days AI strategy is on almost every technical leader’s mind. Collectively and individually, folks are eyeing opportunities and weighing risks as we all work to unlock the potential of AI—from LLMs to machine learning—in a quickly evolving landscape.

So how should we evaluate the potential (or inevitable) changes to engineers’ work, anticipate those changes, and adapt to them?

Let’s talk about it.

We’re recapping insights from “Cutting through the noise: Understanding AI’s role in engineering,” a Retool-hosted roundtable that featured engineering leaders from LangChain, Pinterest, and Encamp. Read on for their key takeaways and perspectives on AI security and governance, AI in open source, and shipping AI-enabled product features—and to learn how these leaders see AI supporting developers’ success and productivity.

Quotes have been lightly edited for clarity and length.

On using AI to ramp up developer productivity

More and more companies are leaning on AI, particularly machine learning, to improve developer productivity. And if you’ve spent any time online lately, you’ve likely seen anxiety or rumbling that these changes could be the beginning of the end for software development roles as we know them.

But panelists Ankush Gola (co-founder of LangChain), Ben Jacobs (CTO at Encamp), and Chunyan Wang (Head of Big Data Infrastructure & Analytics Platform at Pinterest) had a down-to-earth take: these efforts to layer AI into software development aren’t intended to replace humans but to augment their skills—so tasks like generating boilerplate code or translating natural language queries become more efficient.
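To make the second of those tasks concrete, here’s a minimal sketch of handing a natural-language question to an LLM and getting SQL back. The table schema, model name, and client interface (the pre-1.0 openai Python library) are assumptions for the sake of the example, not anything the panelists described:

```python
# Minimal sketch: translating a natural-language question into SQL with an LLM.
# Assumes the pre-1.0 openai client and an OPENAI_API_KEY in the environment;
# the schema and model name are illustrative.
import openai

SCHEMA = "orders(id, customer_id, total_usd, created_at)"

def question_to_sql(question: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,
        messages=[
            {"role": "system",
             "content": f"Translate the user's question into one SQL query "
                        f"against this schema: {SCHEMA}. Return only SQL."},
            {"role": "user", "content": question},
        ],
    )
    return response["choices"][0]["message"]["content"].strip()

print(question_to_sql("What was total revenue last month?"))
```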

“If we think about people as a knife,” posits Gopal Raman, Deployed Engineer at Retool and the roundtable’s moderator, “these AI products are like the stones that make people sharper, more effective.”

Pinterest, for example, is tapping into machine learning to tailor user content and turning to AI for code generation to assist developers. Both of these initiatives streamline the company’s internal operations, supporting human efforts rather than replacing them. Encamp and LangChain use AI-based tools like GitHub Copilot to help engineers solve problems, familiarize themselves with new technologies, and improve ideation and sharing. (“All of the engineers on our team have been making use of Copilot integrated with VS Code,” says Ben. “We’ve been using it as a productivity force multiplier for individuals.”)

Indeed, “AI solves problems that traditionally would require human intelligence,” says Chunyan. But that doesn’t mean it’s a substitute for people.

AI-powered tools “can help you refine your thinking,” says Ben. “But at the end of the day, it’s important to still have the individual engineers themselves really taking ownership and responsibility, and deeply understanding the first principles [behind] the work. If you're using [AI tools to ‘pair program’ and] just kind of ‘code monkey’ along, that might be a good way to stand up a project, but it's not very good long term.”

Chunyan adds: “We’re not trying to completely automate away the programming work.” Sure, there might be a particular task that warrants automation, but “then it’s an actual engineer who owns [the] task, refining it and debugging it, making sure it’s tested end to end.”

Going one further, if AI can augment an engineer’s output, so too can an engineer augment AI’s (and often, they should). “I want to put in human checks to make sure we’re delivering correct information. We have experimental plans to begin including some AI-enabled internal tooling for our services departments,” says Ben. “But I think it’ll be a long time before we would feel comfortable fully turning over the keys.”

“I would hope to see that engineers are able to spend more of their time doing what I think is incredibly high-leverage work for humans.”

Even with human stewardship and oversight, once busy work is automated, developers may have more time to spare and, in turn, opportunities: “I would hope to see that engineers are able to spend more of their time doing what I think is incredibly high-leverage work for humans,” says Ben. “Teaching each other, breaking down concepts, making sure that we’re making the right decisions architecturally and spending less time writing tedious boilerplate code.”

On risks and security in AI deployments

As the use of AI tools becomes more widespread, managing the associated risks (from reliability to regulatory) and ensuring the security of your data is vital—and the AI experts we talked to drove that point home.

“We’re in the type of industry that takes a while to build trust and not very long to lose [it],” says Ben.

“I think a lot of people and companies are concerned about the data they’re inputting to the model providers,” says Ankush—and rightfully so. But he also notes that “security standards are coming up to par” as some AI systems are “backed by large companies with high security standards.”

Still, some third-party compliance reviews for startups instruct teams not to use any AI tool that’s not self-hosted, at least for now. Ankush suggests that as AI security gets stronger, decisions about whether to deploy on-prem may come more from “a performance perspective—if they have the infrastructure to support,” though we may not be there just yet.

“We’re in the type of industry that takes a while to build trust and not very long to lose [it].”

In these early(ish) days especially, an organization’s decision to employ a certain vendor or use a given tool involves placing significant faith in them to maintain high standards of protection and confidentiality—and it’s your responsibility to do your due diligence, read the fine print, ask good questions (“Does the vendor solution treat private data the same way we would treat data internally?” Chunyan suggests), and get it right.

Of note: To securely deploy LLMs to their teams, Pinterest, Encamp, and LangChain build applications using Retool as their development platform. (Retool comes with SSO, RBAC, audit logs, and governance.) They connect their data plus any LLM or AI API, and build custom apps with their preferred blend of drag-and-drop and code.

On separating production use cases from hype demos

In an environment filled with excitement and buzz, it can be challenging to discern what’s real, what’s achievable, and what’s just hype. The AI leaders we spoke to advise being pragmatic, patient, and discerning—and getting down to building to see what’s possible.

“The excitement in the AI space sometimes gives people the impression that it’s really easy to build AI systems, but it’s really hard, right?” says Ankush. “Talking to people you trust who have been in the space for a while, who are smart and generally knowledgeable about the trends can provide you with good opinions on what will stay and what will not.”

“We're trying to solve customer problems and not necessarily be up to date on, you know, what just happened yesterday. I think the same old rules for adopting software packages still apply to an extent: if it just came out last week, ignore it. Wait and see if it has a little bit of staying power,” says Ben.

“We actually start to build things, [which helps] cut down the noise quite a bit. And so internally we host very interesting demos—and that’s really our product being built… By actually encouraging people to go ahead and do things, it helps us to understand what the technology could actually do,” adds Chunyan.

“By actually encouraging people to go ahead and do things, it helps us to understand what the technology could actually do.”

The trio’s recommendations? Interact with trusted individuals in the field, dogfood applications, and push for action like hackathon participation to help your team understand the technology more earnestly before diving in headfirst.

On iterative testing in AI deployments

Alright—so how should we think about understanding performance and outcomes?

“As more and more options become available to developers, it becomes increasingly important to have the tools necessary to evaluate how one version of your application is doing against another,” says Ankush. (Case in point: Pinterest employs an A/B testing platform to evaluate the influence of machine learning models on user engagement.)
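The panelists didn’t share implementation details, but the core of that kind of comparison is straightforward to sketch: assign each user to a variant, log an engagement signal, and compare rates across variants. The variant names, metric, and traffic below are hypothetical stand-ins:

```python
# Minimal sketch of comparing two versions of an AI feature: assign each user to a
# variant deterministically, record an engagement signal, and compare rates.
# Variant names, the engagement signal, and the simulated traffic are hypothetical.
import hashlib
import random
from collections import defaultdict

VARIANTS = ["model_v1", "model_v2"]
results = defaultdict(lambda: {"users": 0, "engaged": 0})

def assign_variant(user_id: str) -> str:
    # Hash-based assignment so the same user always sees the same variant.
    digest = int(hashlib.md5(user_id.encode()).hexdigest(), 16)
    return VARIANTS[digest % len(VARIANTS)]

def record_interaction(user_id: str, engaged: bool) -> None:
    variant = assign_variant(user_id)
    results[variant]["users"] += 1
    results[variant]["engaged"] += int(engaged)

# Simulated traffic standing in for real interaction logs.
for i in range(10_000):
    user = f"user-{i}"
    engaged = random.random() < (0.12 if assign_variant(user) == "model_v2" else 0.10)
    record_interaction(user, engaged)

for variant, counts in results.items():
    print(variant, f"engagement rate: {counts['engaged'] / counts['users']:.3f}")
```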

Continuous feedback, refinement, and adaptation are also crucial when deploying AI applications. This kind of iterative approach helps teams manage the nondeterministic nature of AI and keeps applications aligned with user needs.

Ankush details how he thinks about it:

  • “I think there are a few phases of AI application development: one is prototyping, and sort of the debugging. This is when you have complex autonomous agents and things can go wrong at any step and errors tend to cascade. It’s important to get a highly inspectable view of what’s going on.”
  • “Step two is getting your AI application ready for production. This is refinement, and when data set curation is really important to get some gold standard reference inputs and outputs, or outputs on how your system should be performing based on certain inputs.” (A minimal sketch of this kind of reference check follows the list.)
  • “Once you deploy your application, it’s important to have [a] constant feedback cycle and collect user feedback and get a sense of how your system is performing out in the real world.”
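As a rough illustration of that dataset-curation phase, here’s a regression-style check against a small “gold standard” set. The placeholder generate_answer() stands in for whatever LLM-backed application is under test, and the exact-match scoring is deliberately simplistic (teams often use semantic similarity or an LLM grader instead):

```python
# Minimal sketch of gold-standard dataset checks: run the application over curated
# reference inputs and flag any outputs that drift from the expected answers.
# generate_answer() is a placeholder for the real AI application under test.

def generate_answer(question: str) -> str:
    # Placeholder for the actual LLM-backed application.
    return "Our refund window is 30 days."

GOLD_SET = [
    {"input": "How long is the refund window?",
     "expected": "Our refund window is 30 days."},
    {"input": "Do you ship internationally?",
     "expected": "Yes, we ship to most countries."},
]

def exact_match(predicted: str, expected: str) -> bool:
    return predicted.strip().lower() == expected.strip().lower()

failures = [case for case in GOLD_SET
            if not exact_match(generate_answer(case["input"]), case["expected"])]
print(f"{len(GOLD_SET) - len(failures)}/{len(GOLD_SET)} reference cases passed")
for case in failures:
    print("Regressed:", case["input"])
```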

Once that’s all said and done, Ankush adds: “An interesting developer workflow that we’ve heard from lots of people is they’d like to… find the outputs of their AI application that tend to do the best among users, maybe save that to some dataset. And then use those as [for instance], a few examples on their prompts.”
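Sketched in code, that feedback-to-few-shot loop might look something like the following; the feedback log, rating threshold, and prompt format are all hypothetical:

```python
# Minimal sketch of reusing the best-rated production outputs as few-shot examples.
# The feedback log, rating threshold, and prompt layout are hypothetical.

feedback_log = [
    {"input": "Summarize this incident report", "output": "Two-line summary...", "rating": 5},
    {"input": "Draft a status update", "output": "Short update...", "rating": 2},
    {"input": "Summarize this support ticket", "output": "Concise summary...", "rating": 4},
]

def best_examples(log, min_rating=4, limit=3):
    # Keep only well-rated entries, best first.
    top = [entry for entry in log if entry["rating"] >= min_rating]
    return sorted(top, key=lambda entry: entry["rating"], reverse=True)[:limit]

def build_prompt(task: str) -> str:
    shots = "\n\n".join(
        f"Input: {entry['input']}\nOutput: {entry['output']}"
        for entry in best_examples(feedback_log)
    )
    return f"{shots}\n\nInput: {task}\nOutput:"

print(build_prompt("Summarize this postmortem"))
```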

On the role of open source in AI development

Companies large and small have been impressively quick to learn and deploy cutting-edge AI using their internal teams and open source. For example, an engineering leader could spin up their own LangChain and Chroma infrastructure for free and build AI tech themselves.
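To give a sense of how low that barrier can be, here’s a minimal retrieval-augmented QA sketch using LangChain’s 2023-era interfaces with Chroma as the vector store. Newer LangChain releases have since reorganized these imports, an OpenAI API key is assumed, and the documents and question are purely illustrative:

```python
# Minimal retrieval-augmented QA sketch with LangChain + Chroma (2023-era imports).
# Assumes OPENAI_API_KEY is set; documents and the question are illustrative.
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.chains import RetrievalQA

docs = [
    "Encamp helps companies manage environmental compliance reporting.",
    "Pinterest uses machine learning to personalize the content users see.",
]

# Embed the documents into a local Chroma collection, then wire up a QA chain.
vectorstore = Chroma.from_texts(texts=docs, embedding=OpenAIEmbeddings())
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0),
    retriever=vectorstore.as_retriever(),
)
print(qa.run("What does Pinterest use machine learning for?"))
```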

For LangChain, open-source is fundamental to giving developers the toolkit they need to build. “[LangChain] is formed around the open-source Python and TypeScript libraries we provide,” says Ankush. “Everything we build is an effort to complement the open-source library to give developers extra tools in their toolkit to do the things they need with generative AI. We’re still making a huge investment in open source.”

Open-source contributions can make AI better for everyone. Chunyan highlights that Pinterest built Querybook, an open-source big data IDE with a notebook interface that aims to make composing queries, creating analyses, and collaborating with others simpler. “It has been pretty successful, and it’s an area that we want to continue to double down on,” says Chunyan. He notes that the internal team has been “looking at how to add some of the [AI] capabilities,” and that the internal demos have been working “beyond our expectations so far.” The hope is to be able to announce an open-source components assistant soon. 🤞

Plus, since open-source software fosters a developer community, it can feed back into the product improvement process—which means the latest trends in AI and machine learning can be further discussed, debated, and tested. Expect to see more helpful open-source AI projects.

What’s next for engineering teams and AI?

There’s a lot of optimism regarding the role AI can play in increasing development velocity and enabling developers in new ways. The key for technical leaders is to look for ways to use AI to gain a competitive advantage while developers vet tools to find the best fit for their needs. Working through these decisions is no easy task: it involves weighing the costs, risks, and potential long-term gains of each AI tool you explore.

“Typically, the top question is whether a solution adds incremental value to the business—and what’s the cost associated?” says Chunyan. “We’re also trying to look at it from a competitive landscape perspective. For example, if we’re going with a certain vendor or a provider, we’re placing our trust in them and essentially helping them grow within the industry.”

While much is in flux, one thing is certain: AI is changing how engineering teams can build and ship products. Coming up with a flexible AI strategy will be critical to making the most of it.

Want to dig further into these topics? Watch the full roundtable.

If you're looking to build secure AI-powered apps and workflows, you can start building them for free with Retool AI—or book a demo.
