How GetFocus Works
Prompting like a Pro
21 min
Effective prompting is the foundation of getting useful results from any AI-powered feature in GetFocus. Regardless of which feature you're using, one principle applies everywhere:

🗝️ Be clear and specific about your requirements and the output you expect.

LLMs have evolved significantly, and with improved reasoning capabilities there's no longer a need for overly complex prompts or assigning roles like "you are a market analyst." Instead, focus on clarity and precision.

## Prompting in general chat

Use general chat to explore topics, do quick research, or brainstorm ideas. The same clarity principle applies: the more focused your question, the more useful the response.

## Technology scouting

When using the technology scouting feature, the way you phrase your prompt has a major impact on the results. If you have access to the Agent, it will guide you through defining your problem statement automatically, so you don't need to craft the initial scoping prompt yourself. If you don't have access to the Agent, follow the manual prompting steps below.

The system can categorize a wide range of technologies, from established to emerging and niche, but the specificity of your prompt determines how broad or focused the returned list will be. Here are two examples to illustrate this.

### 🔋 Example 1: Battery technologies

- **General prompt:** "energy storage technologies" returns a broad range of technologies such as lithium-ion batteries, flow batteries, supercapacitors, and mechanical storage systems.
- **Specific prompt:** "battery chemistries" focuses the results on different chemical compositions, such as lithium-sulfur, sodium-ion, or solid-state batteries.
- **Highly specific prompt:** "battery chemistry for electric aviation" narrows the scope even further, to technologies optimized for weight, energy density, and safety in aviation use cases.

Each level of specificity results in a different depth and focus of technologies.

### 🧬 Example 2: Medical diagnostics

- **General prompt:** "medical diagnostic technologies" returns a wide set of categories, including imaging technologies, wearable sensors, biosensors, and lab-on-a-chip devices.
- **Highly specific prompt:** "diagnostic technologies for early cancer detection" targets a narrower set of tools, such as liquid biopsies, circulating tumor DNA (ctDNA) analysis, and AI-based imaging solutions.

Again, a more focused prompt will guide the system toward more targeted, relevant technologies.

### Down-selecting and comparing technologies

Once you have a list of technologies, a common next step is down-selection. Before comparing, check whether the technologies are competing or complementary:

- **Complementary** technologies work together in the same application (e.g., battery management systems alongside lithium-ion batteries).
- **Competing** technologies serve the same purpose in different ways (e.g., lithium-sulfur vs. sodium-ion batteries).

Also compare like with like. Avoid putting a single technology (e.g., solid-state batteries) head to head against a broad category (e.g., energy storage technologies): the category contains dozens of specific solutions, making the comparison misleading.

Useful prompts for structuring comparisons:

- "Identify which of the scouted technologies are competing alternatives for energy storage."
- "Analyze the list of scouted technologies and divide them into two categories: (1) specific technologies, i.e. clearly defined, individual solutions; (2) group/umbrella technologies, i.e. broad categories that include multiple specific technologies. Present the results in a table with columns: category, technology name, rationale for classification."
- "Analyze the list of scouted technologies and organize them into categories that are mutually exclusive and collectively exhaustive (MECE). Ensure each technology belongs to only one category and all technologies are included. Output a table with: category name, included technologies, short rationale for grouping."

By framing prompts with this distinction in mind, you'll generate more meaningful scouting insights and avoid misleading conclusions.

### Evaluating technologies

When it comes to evaluation, there are two main approaches, depending on how much control you want to retain.

#### ✅ Option 1: Give more control to the LLM

If speed is your priority, you can let the model take the lead. For example:

Prompt example: "Evaluate all technologies for the following application area: energy storage for electric vehicles."

This approach is fast but gives the LLM broad freedom in how it interprets and structures the evaluation.

#### ✅ Option 2: Stay in control

If you prefer a more structured and tailored evaluation, include specific instructions in your prompt. For example:

- Define evaluation criteria (e.g., pressure resistance, weight, emissions).
- Request scoring or binary values (e.g., a 1–10 scale, or yes/no).
- Specify the output format for clarity (e.g., a table with scores and reasoning).

Prompt example: "Evaluate all technologies with a score from 1 to 10 based on their ability to withstand high temperatures. Provide the output in a table format, including a short explanation for each score."

This way, you remain in control of the evaluation framework while still leveraging the LLM's reasoning capabilities.

## Write effective patent search queries

Searching in GetFocus is straightforward, and there are multiple ways to find relevant patents (read more in docid\ a76atae2twqap3j9qm0w5). However, when searching by technology name, it's worth giving extra thought to your query. Patents are legal documents that often cover multiple applications, materials, or chemistries, and the depth of your query determines the size and focus of your results.

💡 Broader queries are useful in emerging domains, where overly specific language might exclude relevant patents that use different technical terminology.

Let's take a look at some search query examples.

**Very specific:** "hydrometallurgical recycling of lithium-ion batteries for recovery of cobalt and nickel"

- Very narrow: only patents that match this precise combination will appear.
- Risk: results may be too few, especially in niche or emerging domains.
- Use this approach when you want to quickly check whether a technology is already patented for a specific material, use case, or application.

**Specific:** "hydrometallurgical recycling of lithium-ion batteries"

- Broader than the previous one, retrieving more patents for an overview of relevant inventions.
- Best for building a dataset when you want a full picture of how a technology is applied to a specific use case.
- Works well for established technologies where enough data exists.

**Broader:** "hydrometallurgical recycling of batteries"

- Returns a large and varied set of patents, including some only loosely connected to your original intent.
- Useful for scouting emerging or niche domains, where being too specific might exclude valuable results.
- Advantage: increases the chance of discovering unexpected but relevant insights, since patents often use broad legal and technical language.

With these examples in mind, you can adjust your query detail depending on what your goal is.

## Writing LLM filter prompts

Another key area where prompting skills are important is when filtering datasets with the LLM filter. (For an overview of what the LLM filter is and how it works, see the article "docid\ wi3kku8jft0fpwd7xnrnz".)

### Best practices and common mistakes

As with any other prompt, clarity in your instructions is essential: the way you phrase your filter prompt directly impacts the quality and precision of the results. Even when you aim for clarity, it's easy to fall into a few common prompting mistakes that can confuse the model or weaken your results.

**Common mistakes to avoid:**

- **Confusing instructions:** the prompt mixes multiple ideas or unclear wording, leaving the model uncertain about what to prioritize.
- **Vague instructions:** the prompt lacks precise terms, definitions, or measurable criteria, leading to broad or inconsistent results.
- **Contradictions:** the prompt includes conflicting requirements (e.g., "include A but not A-like items"), making it impossible for the model to follow a single logic.
- **Incomplete instructions:** essential details such as scope, desired format, or evaluation criteria are missing, forcing the model to guess.
- **Assuming the LLM can read your mind:** the prompt relies on implied intent instead of explicitly stating expectations and boundaries.

**Best practices for writing effective LLM filter prompts:**

- **Start with a clear filtering command.** Example: "Find/only include patents/inventions that…"
- **Use strong, directive phrasing.** Prefer words like "explicitly," "specifically," "primary," "focus on," or "discuss." Avoid vague verbs like "mention," since patents often reference technologies without making them the central subject.
- **Combine technology and use case requirements.** Example: "The patent must explicitly state both the technology (lithium-sulfur batteries) and the use case (electric aviation)."
- **Define what counts as "relevant" and "irrelevant."** Example: "Include patents that specifically propose a novel technical solution, not just incremental design variations."
- **Use exclusion rules for precision.** Example: "If not all conditions are met, exclude the patent." This enforces strict filtering and ensures only highly relevant results are returned. Example: "Exclude patents focused solely on consumer electronics, unless they directly address energy storage at grid scale."

### LLM filter prompt examples

To make these best practices more concrete, let's look at a few examples of bad prompts versus well-structured ones.

❌ **Too vague:** "Show me patents that mention batteries."

- Likely to return irrelevant results where batteries are only referenced in passing.

✅ **Clear and directive:** "Find patents that explicitly discuss lithium-ion battery technologies."

- Focuses results on patents where lithium-ion batteries are the main subject.

✅ **Multi-condition with strict exclusion:** "Only include patents that explicitly state both the technology (solid-state batteries) and the use case (electric aviation). If both conditions are not met, exclude the patent."

- Includes multiple conditions for precision.
- Uses strict exclusion logic to filter out noise.

❌ **Ambiguous scope:** "Look for battery chargers with some kind of smart control, unless they're just regular ones. Include car chargers but not the other kind. Focus on fast charging, like really quick ones, but it's mostly about how it handles power, or something like that."

- Uses undefined terms ("some kind of smart control," "regular ones," "the other kind") that make classification ambiguous.
- Mixes conflicting scope instructions ("include car chargers but not the other kind") without clarifying category boundaries.
- Lacks measurable criteria ("really quick ones") that would allow consistent filtering.
- Provides no clear analytical focus ("it's mostly about how it handles power, or something like that"), leaving interpretation open-ended.

✅ **Structured with inclusion/exclusion rules:**

"Task: determine if the patent is about battery chargers with closed-loop (sensor feedback) smart charging control (not basic constant-voltage or timer-based chargers).

Include if:
- Control/feedback: uses closed-loop control (e.g., current/voltage feedback via sensors or microcontroller); adaptive charging that adjusts parameters based on battery state (e.g., SoC, temperature, or impedance).
- Mechanism: controller modulates charging current or voltage via PWM, MOSFETs, or switching regulators; includes monitoring of charge stages (constant current, constant voltage, trickle).
- Components: charger circuitry with microcontroller or charge management IC, sensors for voltage/current/temperature, and communication interface (display, LEDs, or app feedback).

Exclude if:
- Simple linear or constant-voltage chargers with no feedback or adaptive control.
- Chargers designed solely for industrial energy storage or EV stations (outside small-device context)."

- Defines the target concept precisely and includes clear inclusion/exclusion rules.
- Differentiates closely related domains (smart feedback-based chargers vs. simple constant-voltage chargers).
- Adds synonym hints and technical cues (e.g., multi-stage charging, feedback sensors, adaptive control) to stabilize classification and reduce ambiguity.

### 🧩 Template for LLM filter prompt

Writing an effective LLM filter prompt comes down to precision, structure, and clear logic. The best prompts define exactly what to include and exclude, give the model measurable cues to work with, and anticipate edge cases where results could be ambiguous. Use the template below as a practical starting point for building consistent, high-quality LLM filters.

- **Task:** classify patents about [clearly define the target concept or technology] in [specific domain or application area].
- **Include if:** [describe the mechanisms, components, or functional features that qualify a patent as relevant; specify any measurable thresholds or numeric cues].
- **Exclude if:** [list adjacent domains, unrelated technologies, or cases missing required features that should be filtered out].
- **Signals to accept:** [provide synonyms or technical phrases that indicate relevance].
- **Edge cases:** [describe how to handle borderline examples; define a tie-break rule such as "include only if explicitly stated" or "exclude if ambiguous"].

## Prompting for chat with set and chat with invention

Another area where prompting skills are essential is when performing deep dives using chat with set (CWS) or chat with invention (CWI).

### Prompting tips for chat with set

Responses in chat with set are drawn from your patent dataset. During longer conversations, the model may also draw on general knowledge, so use anchoring phrases to keep results grounded in the data:

- "According to the set…"
- "Based on the patent data…"
**Example prompts**

To help you get started, here are a few ideas for prompts you might try in CWS:

- "According to the set, what are the key innovation drivers of [technology name]?"
- "What are the recent advancements in [technology X] according to the set?"

⚠️ The model decides on its own what counts as "recent." For precise date ranges, either specify years in your prompt (e.g., "between 2018 and 2023") or apply a publication date filter before starting the chat.

- "According to the set, compare the patent portfolios of [organization A] and [organization B] in the field of [technology name]. Identify white spaces by highlighting areas where one organization is active but the other has limited or no coverage. Provide output in a table."
- "According to the set, what are the main technical challenges in the development and application of [technology name]? Summarize the challenges in a structured list, and for each one provide a short explanation based on the patent data."

### Prompting tips for chat with invention

When it comes to chat with invention, the main recommendation remains the same: keep your prompts clear and directive. Here are some examples to help you get started:

- "List the main claims of this patent and summarize each in one sentence."
- "List the materials mentioned in this invention. Provide output in a table with columns: material, role, section of the patent where it appears."
- "Based on the claims, what aspects of this invention appear to be novel compared to conventional lithium-ion batteries?"
- "According to this patent, what potential application areas are explicitly described or implied?"
- "Summarize how this invention could be applied in electric aviation, including any technical advantages mentioned in the patent text."
- "What technical challenges are acknowledged or suggested in this patent regarding scalability of the invention?"
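Finally, the 🧩 LLM filter template from the "Writing LLM filter prompts" section lends itself to a small string builder, which keeps your filters consistent across datasets. This is a minimal sketch under our own assumptions (the `build_filter_prompt` function and its field names are hypothetical, chosen to match the template's five fields; it is not a GetFocus API):

```python
def build_filter_prompt(task, include_if, exclude_if, signals, edge_cases):
    """Assemble an LLM filter prompt from the template's five fields.

    Each list argument becomes a bulleted block, keeping the
    include/exclude logic explicit and easy for the model to follow.
    """
    def block(title, items):
        return title + "\n" + "\n".join(f"- {item}" for item in items)

    return "\n\n".join([
        f"Task: {task}",
        block("Include if:", include_if),
        block("Exclude if:", exclude_if),
        block("Signals to accept:", signals),
        block("Edge cases:", edge_cases),
    ])

# Example: rebuild the smart-charger filter from the examples section.
prompt = build_filter_prompt(
    task="Classify patents about closed-loop smart battery chargers for small devices.",
    include_if=[
        "Closed-loop control via current/voltage feedback (sensors or microcontroller)",
        "Adaptive charging based on battery state (SoC, temperature, impedance)",
    ],
    exclude_if=[
        "Simple constant-voltage chargers with no feedback or adaptive control",
        "Chargers designed solely for industrial energy storage or EV stations",
    ],
    signals=["multi-stage charging", "charge management IC", "feedback sensors"],
    edge_cases=["Include only if closed-loop control is explicitly stated; exclude if ambiguous."],
)
print(prompt)
```

Filling the template in code rather than by hand makes it easy to reuse the same inclusion/exclusion structure across filters and to review each field in isolation.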

