A feedback loop (also known as closed-loop learning) describes the process of leveraging the output of an AI system, together with the corresponding end-user actions, to retrain and improve models over time. The AI-generated outputs (predictions or recommendations) are compared against the final decision (for example, whether or not to perform work), providing feedback to the model and allowing it to learn from its mistakes.
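The comparison between an AI output and the end user's final decision can be sketched as follows. This is a minimal illustration, not C3 AI code; the record fields and the 0.5 decision threshold are assumptions.

```python
from dataclasses import dataclass

@dataclass
class FeedbackRecord:
    asset_id: str          # illustrative identifier
    predicted_risk: float  # model output in [0, 1]
    action_taken: bool     # operator's final decision (e.g., work order created)

def to_training_example(rec: FeedbackRecord, threshold: float = 0.5):
    """Compare the model's recommendation with the operator's action:
    the action becomes the ground-truth label for retraining, and the
    agreement flag tells us whether the model was right."""
    recommended = rec.predicted_risk >= threshold
    label = int(rec.action_taken)
    model_was_right = recommended == rec.action_taken
    return label, model_was_right

label, agreed = to_training_example(FeedbackRecord("pump-7", 0.82, True))
```

Each such record closes the loop: the label feeds the next training run, and the agreement rate measures how often the model's recommendations match human decisions.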
In AI, machines learn how to execute tasks that are typically performed by humans. Like humans, AI systems make mistakes during their infancy and need a feedback loop to confirm or invalidate their decisions.
Feedback loops let AI systems know what they did right or wrong, providing data that enables them to adjust their parameters and perform better in the future. In the C3 AI Reliability application, operators can prioritize maintenance actions based on risk scores and trigger work orders. If users disagree with the application’s recommendations, they can log their decisions to improve future recommendations.
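Logging operator decisions on risk-based recommendations might look like the sketch below. The function and field names are hypothetical, chosen only to illustrate how disagreements could be captured for the next retraining cycle.

```python
# In-memory log of operator responses to recommendations (illustrative).
feedback_log = []

def log_decision(asset_id: str, risk_score: float, accepted: bool, note: str = ""):
    """Record whether the operator accepted the recommended maintenance
    action; rejections carry a note explaining the disagreement."""
    feedback_log.append({
        "asset_id": asset_id,
        "risk_score": risk_score,
        "accepted": accepted,
        "note": note,
    })

log_decision("compressor-3", 0.91, accepted=False, note="sensor recently replaced")

# Disagreements are the most informative examples for retraining.
disagreements = [f for f in feedback_log if not f["accepted"]]
```

Flagging rejections explicitly, rather than inferring them from inaction, gives the retraining pipeline unambiguous negative examples.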
AI systems must adapt to evolving data and new patterns that emerge over time. A feedback loop reinforces the model’s training with fresh data. In the C3 AI Anti-Money-Laundering application, it is crucial to incorporate the latest typologies and theft patterns through a closed-loop workflow to improve predictions.
C3 AI leverages feedback from subject matter experts through collaboration in technical workshops, field tests on live AI systems, and performance monitoring over time. All of this input ultimately feeds back into the model to improve future performance.
C3 AI’s bi-directional integration with customer databases, such as work order management systems, enables closed-loop feedback: AI outputs are presented to end users within their typical daily workflow, and their corresponding actions are recorded and sent back to the application.
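The two directions of such an integration can be pictured as a pair of queues: recommendations flow out to the work-order system, and operator actions flow back in. This is a conceptual sketch only; none of these names belong to a real C3 AI or work-order API.

```python
from collections import deque

outbound = deque()  # AI recommendations pushed to the work-order system
inbound = deque()   # operator actions returned to the AI application

def push_recommendation(asset_id: str, risk_score: float):
    """Outbound direction: surface an AI output in the user's workflow."""
    outbound.append({"asset_id": asset_id, "risk_score": risk_score})

def record_action(asset_id: str, work_order_created: bool):
    """Inbound direction: the operator's action is captured as feedback."""
    inbound.append({"asset_id": asset_id, "work_order_created": work_order_created})

push_recommendation("turbine-12", 0.87)
record_action("turbine-12", work_order_created=True)
```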
C3 AI provides ML Ops tools to easily retrain and deploy models in a feedback loop. Models can be retrained automatically or on demand, based on model performance drift or the availability of additional training data. New models can then be deployed as challengers alongside the existing champion models in order to track performance on live data before being promoted into production.
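A champion/challenger promotion rule might be sketched as below. The accuracy metric, the toy models, and the promotion margin are all illustrative assumptions, not part of the C3 AI platform.

```python
def evaluate(model, data):
    """Fraction of live examples the model labels correctly."""
    return sum(model(x) == y for x, y in data) / len(data)

def promote_if_better(champion, challenger, live_data, margin=0.02):
    """Run the challenger in shadow mode; promote it only if it beats
    the champion on live data by at least `margin`."""
    if evaluate(challenger, live_data) >= evaluate(champion, live_data) + margin:
        return challenger
    return champion

# Toy models: the champion always predicts 0; the challenger echoes its input.
champion = lambda x: 0
challenger = lambda x: x
live_data = [(0, 0), (1, 1), (1, 1), (0, 0)]
winner = promote_if_better(champion, challenger, live_data)
```

Requiring a margin, rather than any improvement at all, guards against promoting a challenger whose edge is just noise in the live sample.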