Use LLMs only if you have to!

About:

When solving traditional problems like prediction and classification, the answer to the question, “When do you need an ML model?” was clear: “We need ML models for problems that cannot be solved with engineering and heuristics.”

With the rise of (really large) LLMs this fundamental understanding is becoming less common. People are increasingly using LLMs even for tasks like parsing and other use cases that could be effectively solved with simple engineering implementations, which offer greater control.

This write-up is a reminder of the basics, emphasizing that LLMs should be reserved for situations where they are truly necessary.

Questions before choosing LLM architecture:

These are some questions that help avoid overusing LLMs. They are worth asking at multiple levels, i.e., for the whole problem as well as for each sub-problem you might have to solve.

  1. Can the problem be solved with a reliable engineering solution? Many problems can be simplified, and their solutions made more reliable, by opting for engineering solutions rather than relying on an LLM.

  2. Can the problem be solved using heuristics? Many problems can be simplified by choosing the right domain-specific heuristics.

  3. Can the problem be solved using user input? Many problems are less costly to solve when we let the user resolve them rather than asking an LLM to.

Context from a real problem:

At TechConative, we’re developing P2C, a platform that simplifies UI development by enabling developers to create functional web apps using MUI, directly from sketches or text prompts. Our aim is to accelerate the development process while ensuring high-quality, responsive designs. Although the project is in its early stages, some important design and architectural decisions have been made that will aid in scaling the solution.

To make things more concrete, let's consider the following use case in the GenAI arena of P2C, where the user says,

  1. Add a header.

  2. Add a table to display First Name and Email id.

  3. Add a column to display Second Name.

Problems to solve:

In the aforementioned context, below are some of the problems (with examples following) that need to be solved:

  1. Generating the MUI elements that the user intends.

  2. Understanding the implicit context for the user operations.

  3. Resolving possible ambiguities while executing user intents.

Navigating nuanced complexities:

Can the problem of “MUI elements generation” be solved with engineering or heuristics?

Let’s see what we mean by this.

When the user says, “Add a table with m columns and n rows”, to achieve the goal, we have to generate the Material UI code for the table, rows and columns, including the imports for the components and other code required for it.

But, we don’t need an LLM to do “ALL” that has been mentioned here.

For the user intent of adding a table, the LLM decodes the intent and responds with simple JSON like:


[{ "add": [{ "type": "table", "row-count": n, "column-count": m }] }]

And from there, the rest are engineering problems, which can be solved with greater control:

  1. Generating the MUI component that matches the current style.

  2. Using the component in the right place.

  3. Generating the imports.
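To illustrate the division of labour, here is a minimal sketch of how the intent JSON can be expanded deterministically. The function names, the JSX template, and the import line are illustrative assumptions, not P2C's actual implementation:

```python
import json

# Hypothetical MUI import line for the generated table; a real generator
# would also match the project's current styling and insertion point.
MUI_TABLE_IMPORTS = (
    "import { Table, TableBody, TableCell, TableHead, TableRow } from '@mui/material';"
)

def render_table(column_count, row_count):
    """Deterministically render MUI JSX for a table -- no LLM involved."""
    header = "".join(f"<TableCell>Col {i + 1}</TableCell>" for i in range(column_count))
    body = "".join(
        "<TableRow>" + "<TableCell></TableCell>" * column_count + "</TableRow>"
        for _ in range(row_count)
    )
    return (
        "<Table><TableHead><TableRow>" + header + "</TableRow></TableHead>"
        "<TableBody>" + body + "</TableBody></Table>"
    )

def handle_intents(intent_json):
    """Expand the LLM's intent JSON into MUI code plus the imports it needs."""
    snippets = []
    for op in json.loads(intent_json):
        for element in op.get("add", []):
            if element["type"] == "table":
                snippets.append(
                    render_table(element["column-count"], element["row-count"])
                )
    return MUI_TABLE_IMPORTS + "\n" + "\n".join(snippets)

code = handle_intents('[{"add": [{"type": "table", "row-count": 2, "column-count": 3}]}]')
```

The LLM's only job was the one-line JSON above; everything after it is plain, testable template code.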

Can the problem of “understanding implicit context” be solved with engineering or heuristics?

To a great extent, yes.

Let’s consider an example of “implicit context.” When a user says, “Add a column for <some-value>,” we need to understand where the user intends to perform this operation.

Our system provides the LLM with a summary of the operations performed, the current state, and the user query. If the LLM can help with implicit context, that’s great. However, we don’t rely on the LLM entirely. We use heuristic configurations to solve the problem. For instance, if the LLM decodes the intent of “adding a column,” we have a config that checks if a table is present on the page. If a table is found, “Bingo!”—we’ve understood the context even if the LLM missed it.
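The fallback just described can be sketched as a small lookup. The config shape and names here are hypothetical, chosen only to show the idea of a heuristic backstop behind the LLM:

```python
# Hypothetical heuristic config: for each decoded intent, which element
# types on the page can act as the implicit target.
CONTEXT_RULES = {"add-column": ["table"], "add-row": ["table"]}

def resolve_context(intent, llm_target, page_elements):
    """Use the LLM's answer when it gives one, else fall back to the config."""
    if llm_target is not None:
        return llm_target
    candidate_types = CONTEXT_RULES.get(intent, [])
    matches = [e for e in page_elements if e["type"] in candidate_types]
    # A single match means the context is clear even though the LLM missed it.
    return matches[0] if len(matches) == 1 else None

page = [{"id": "header-1", "type": "header"}, {"id": "table-1", "type": "table"}]
target = resolve_context("add-column", None, page)
```

If exactly one table is on the page, the heuristic recovers the context deterministically; with zero or several, the problem moves on to ambiguity resolution.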

Can the problem of “resolving ambiguities” be solved with engineering or heuristics?

Yes, to a large extent.

For example, if the user intends to “add a column” and there are multiple tables on the page, this creates ambiguity. We resolve this by checking the user’s operation history. If the last operation was on a specific table, it’s likely that this is the table the user intends to modify.
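The history heuristic amounts to a reverse scan of past operations. A minimal sketch, with the history record shape assumed for illustration:

```python
def resolve_ambiguous_target(candidates, history):
    """Among several possible targets, prefer the most recently modified one."""
    if len(candidates) == 1:
        return candidates[0]
    for op in reversed(history):  # walk the operation history, newest first
        if op["target"] in candidates:
            return op["target"]
    return None  # still ambiguous -- e.g. ask the user instead of guessing

tables = ["table-1", "table-2"]
history = [
    {"action": "add-table", "target": "table-1"},
    {"action": "add-table", "target": "table-2"},
    {"action": "add-column", "target": "table-2"},
]
```

Note the `None` branch: when the heuristic cannot decide, asking the user is cheaper and more reliable than letting an LLM guess.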

Other examples:

Handling typos: Our code-generation agents handle typos in LLM results by employing fuzzy matching techniques. This is particularly useful when an action refers to existing “states” that the user wishes to leverage.

Mitigating hallucinations: To mitigate hallucinations, we use configuration to get the default attributes of elements rather than relying on the LLM to provide these.
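Both guards are a few lines each. This sketch uses Python's standard-library `difflib` for the fuzzy match; the state names and the defaults config are invented for the example:

```python
import difflib

# Known state names the user can reference; LLM output sometimes misspells them.
KNOWN_STATES = ["firstName", "lastName", "emailId"]

def match_state(name_from_llm):
    """Map a possibly misspelled identifier from the LLM onto a known state."""
    matches = difflib.get_close_matches(name_from_llm, KNOWN_STATES, n=1, cutoff=0.7)
    return matches[0] if matches else None

# Hypothetical defaults config: element attributes come from here, not from
# the LLM, so a hallucinated attribute value never reaches the generated code.
DEFAULT_ATTRS = {"table": {"size": "medium", "stickyHeader": False}}

def element_attrs(element_type, user_overrides=None):
    attrs = dict(DEFAULT_ATTRS.get(element_type, {}))
    attrs.update(user_overrides or {})  # only explicit user choices override
    return attrs
```

A typo like `frstName` snaps back to `firstName`, and attributes the LLM never should have chosen in the first place come from the config instead.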

These are a few examples of how we enhance reliability by using engineering and heuristics alongside LLMs where necessary.

As a side note, we were able to achieve practically usable results with a fine-tuned version of a modest model (Code-LLAMA 7B). This is possible because we do most of the heavy lifting with engineering and heuristics rather than using an LLM to solve every problem.

Summary:

Our guiding principle is: “Don’t use an LLM unless you really have to.” This approach leads to cost-effective, scalable, and reliable solutions over which you have complete control. This write-up is a reminder of a fundamental truth: “You don’t need an ML model when simple engineering and logic can solve the problem.”
