Large Language Models (LLMs), such as ChatGPT and GPT-4, have generated considerable interest in academia and industry due to their remarkable versatility across a wide range of activities, and they are increasingly being applied in other disciplines. However, they are still not fully capable of handling difficult jobs. For example, when writing a long report, the arguments made, the evidence offered to support them, and the overall structure may not live up to expectations in certain user contexts. Likewise, when acting as a virtual assistant to complete work, ChatGPT may fail to communicate with users as intended or even behave inappropriately in certain professional settings.
LLMs like ChatGPT require careful prompt engineering to be used effectively. When LLMs are asked to perform complicated tasks, application engineering becomes difficult: responses are unpredictable, and refining prompts is time-consuming. There is also a gap between giving prompts and getting answers, as users have no access to how answers are created. To bridge this gap, Microsoft researchers propose a new human-LLM interaction pattern called Low-code LLM, which draws on low-code visual programming environments such as Visual Basic or Scratch.
Six simple operations defined on an automatically generated workflow, such as adding or deleting steps, dragging components, and editing text, allow users to control complicated execution procedures. As seen in Figure 1, humans interact with the LLMs as follows: (1) A Planning LLM generates a highly structured workflow for the challenging task. (2) Users modify the workflow using built-in low-code operations that support clicking, dragging, and text editing. (3) An Executing LLM produces results by following the confirmed workflow. (4) Users keep modifying the workflow until they are satisfied with the results. Low-code LLM was demonstrated on four challenging tasks: long content generation, large project implementation, task-completing virtual assistants, and knowledge-embedded systems.
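The interaction loop above can be sketched in code. The following is a minimal illustrative sketch, not the paper's actual implementation: the workflow format, the low-code operations, and the `planning_llm`/`executing_llm` stubs (which stand in for real LLM calls) are all assumptions made for demonstration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Workflow:
    """A structured workflow as a list of step descriptions (assumed format)."""
    steps: List[str] = field(default_factory=list)

    # Low-code operations: add, delete, edit, and reorder steps.
    # Drag-and-drop reordering from the paper is modeled here as `move_step`.
    def add_step(self, index: int, text: str) -> None:
        self.steps.insert(index, text)

    def delete_step(self, index: int) -> None:
        self.steps.pop(index)

    def edit_step(self, index: int, text: str) -> None:
        self.steps[index] = text

    def move_step(self, src: int, dst: int) -> None:
        self.steps.insert(dst, self.steps.pop(src))

def planning_llm(task: str) -> Workflow:
    """Stand-in for the Planning LLM: drafts a structured workflow."""
    return Workflow(steps=[
        f"Understand the task: {task}",
        "Draft an outline",
        "Write each section",
    ])

def executing_llm(wf: Workflow) -> str:
    """Stand-in for the Executing LLM: follows the confirmed workflow."""
    return " -> ".join(wf.steps)

# The loop: plan, let the user revise via low-code operations, then execute.
wf = planning_llm("write a long report")
wf.add_step(2, "Collect supporting evidence")  # user inserts a missing step
wf.edit_step(1, "Draft a detailed outline")    # user refines a step
result = executing_llm(wf)
print(result)
```

In a real system, the user would repeat the edit-and-execute cycle until satisfied, which is the feedback loop the pattern is built around.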
These examples show how the proposed framework allows users to conveniently control LLMs on challenging tasks. Low-code LLM offers the following benefits over the typical human-LLM interaction pattern:
1. Controllable generation: Complicated tasks are decomposed into structured planning workflows and presented to users. Users can control the LLMs' execution through low-code operations, yielding more controllable results. The responses generated under the customized workflow will align more closely with the user's needs.
2. Friendly communication: Users can quickly understand the execution logic of LLMs through the intuitive workflow and can easily adjust it via low-code operations in a graphical user interface. This reduces the need for time-consuming prompt engineering and allows users to effectively translate their ideas into detailed instructions that produce high-quality results.
3. Wide applicability: The proposed paradigm can be applied to various challenging tasks across many domains, especially where human judgment or preference is crucial.
Check out the Paper.
Aneesh Tickoo is a consulting intern at MarktechPost. She is currently pursuing her bachelor’s degree in Information Science and Artificial Intelligence at the Indian Institute of Technology (IIT), Bhilai. She spends most of her time working on projects aimed at harnessing the power of machine learning. Her research interest is image processing and she is passionate about creating solutions around her. She loves connecting with people and collaborating on interesting projects.