Problem
NGO chatbots using LLMs frequently receive non-question inputs (e.g., "hello", "thanks"), which trigger unnecessary model responses. This leads to poor user experience and inefficient token usage.
Proposed Solution
Introduce a classification layer in the AI Platform that filters non-question inputs before they reach the LLM. This will involve:
- Ingesting labeled datasets
- Fine-tuning one or more OpenAI models on this data
- Evaluating the fine-tuned models and selecting the best performer
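The first two steps can be sketched as a small data-preparation helper that turns labeled examples into the JSONL chat format used by OpenAI fine-tuning. The system prompt, labels, and example texts below are illustrative assumptions, not the project's actual dataset:

```python
import json

# Illustrative system prompt and labels; the real ones would come from
# the ingested labeled dataset for this project.
SYSTEM_PROMPT = "Classify the user message as 'question' or 'small_talk'."

def to_finetune_record(text: str, label: str) -> str:
    """Serialize one labeled example as a fine-tuning JSONL line."""
    return json.dumps({
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": text},
            {"role": "assistant", "content": label},
        ]
    })

# Example: write two labeled messages as JSONL training data.
examples = [("hello", "small_talk"), ("How do I register?", "question")]
jsonl = "\n".join(to_finetune_record(text, label) for text, label in examples)
print(jsonl)
```

Each line of the resulting file is one chat transcript ending in the gold label, which is the shape the fine-tuning endpoint consumes.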
Additional context
This builds on prior work done for the SNEHA Small Talk project and generalizes it for broader use within the AI Platform ecosystem. The implementation will be modular and integrated into the existing FastAPI backend to support multiple use cases.
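The modular piece that would plug into the FastAPI backend is roughly the following routing function. The `classify` callable is injected so any model can sit behind it; in production it would wrap the fine-tuned OpenAI model, while the trivial keyword rule here is only a stand-in to make the control flow concrete. All names and the canned reply are hypothetical:

```python
from typing import Callable

# Assumed small-talk phrases for the stand-in classifier; the real
# classifier is a fine-tuned OpenAI model, not this keyword set.
SMALL_TALK = {"hello", "hi", "thanks", "thank you", "bye"}

def rule_based_classify(text: str) -> str:
    """Placeholder classifier: tag obvious small talk, else 'question'."""
    return "small_talk" if text.strip().lower() in SMALL_TALK else "question"

def handle_message(text: str,
                   classify: Callable[[str], str] = rule_based_classify) -> dict:
    """Route a message: canned reply for small talk, LLM otherwise."""
    if classify(text) == "small_talk":
        # Short-circuit: no LLM tokens spent on non-question inputs.
        return {"source": "canned", "reply": "Happy to help! Ask me anything."}
    # In the real flow, the LLM call would happen here.
    return {"source": "llm", "reply": None}

print(handle_message("thanks")["source"])            # canned
print(handle_message("How do I apply?")["source"])   # llm
```

Keeping `classify` as a parameter is what makes the layer reusable across NGO chatbots: each use case can supply its own fine-tuned model without changing the routing code.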
Reference docs: Classification flow doc, Semantic routing, Classification plan