A deep dive into the strategies I learned to harness the power of large language models
Last month, I had the incredible honor of winning Singapore's first GPT-4 Prompt Engineering Competition, which brought together over 400 brilliant entrants and was organized by the Government Technology Agency of Singapore (GovTech).
Prompt engineering is a discipline that blends art and science: it calls for technical understanding as well as creativity and strategic thinking. This is a compilation of the prompt engineering strategies I've learned along the way, strategies that get any LLM to do exactly what you need it to do, and more!
This article covers the following techniques, each marked [Beginner] or [Advanced]:
1. [Beginner] Structuring prompts using the CO-STAR framework
2. [Beginner] Sectioning prompts using delimiters
3. [Advanced] Creating system prompts with LLM guardrails
4. [Advanced] Analyzing datasets using only an LLM, without plugins or code —
With a practical example of analyzing a real-world Kaggle dataset using GPT-4
Effective prompt structuring is crucial to getting optimal responses from an LLM. The CO-STAR framework, the brainchild of GovTech Singapore's data science and artificial intelligence team, is a handy template for structuring prompts. It accounts for all the key aspects that influence the effectiveness and relevance of an LLM's response, leading to more optimal responses.
Here's how it works:
(C) Context: Provide general information about the task.
This helps the LLM understand the specific scenario being discussed, ensuring its response is relevant.
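To make this concrete, here is a minimal sketch of how the CO-STAR elements can be assembled into a single prompt string. The helper function, section labels, and example text are illustrative assumptions, not taken from the article; CO-STAR's remaining letters stand for Objective, Style, Tone, Audience, and Response format, each of which becomes its own labelled section:

```python
def build_costar_prompt(context: str, objective: str, style: str,
                        tone: str, audience: str, response: str) -> str:
    """Assemble a prompt with one clearly labelled section per CO-STAR element."""
    sections = [
        ("CONTEXT", context),
        ("OBJECTIVE", objective),
        ("STYLE", style),
        ("TONE", tone),
        ("AUDIENCE", audience),
        ("RESPONSE", response),
    ]
    # Each section gets a header line, so the LLM can tell the parts apart.
    return "\n\n".join(f"# {name} #\n{text}" for name, text in sections)


# Hypothetical example usage:
prompt = build_costar_prompt(
    context="I want to advertise my company's new product, an ultra-fast hairdryer.",
    objective="Create a social media post that encourages people to buy the product.",
    style="Follow the writing style of successful consumer-tech ad copy.",
    tone="Persuasive and upbeat.",
    audience="Busy professionals aged 25 to 45.",
    response="A short post of under 100 words.",
)
print(prompt)
```

The resulting string would then be sent to the LLM as-is; the point is simply that every aspect influencing the response gets its own explicit slot rather than being left implicit.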