As a cooperative technology service provider, CU*Answers is always searching for efficient, cost-effective methods for serving our clients and their members. Our employees are encouraged to use Artificial Intelligence (AI) tools, provided those tools are used in a responsible, ethical, and transparent manner.
To assist credit unions with their own due diligence, we have provided a list of approved AI tools used in the development of products and services. In addition, credit unions can review and download our AI Policy. If you have questions about the use of AI, you can contact the Compliance and Risk Team.
Approved AI Use Cases:
| AI Name | Purpose | Safeguards | Generative Controls | Quality Assurance |
|---|---|---|---|---|
| Claude.AI | Contributing software code, as well as enhancing QC evaluation and testing practices. | All accounts are required (unless globally restricted) to be configured to disallow training on our content. With that setting in place, provided data will not be used for training, and data retention with the vendor will follow the timeframes specified in the due diligence package. Member PII is not to be uploaded. | All content is reviewed to ensure no reuse or inclusion of copyrighted work. | HITL (human-in-the-loop) review will be utilized for both review and testing of any AI-generated production code. While QC operations may be enhanced by AI, humans are required to validate the testing and/or functionality of any developed (manual or AI) software as new products and features are added or modified. |
| Claude.AI | Identifying technology, developing processes and standards, and creating training tools for the rest of the company. | All accounts are required (unless globally restricted) to be configured to disallow training on our content. With that setting in place, provided data will not be used for training, and data retention with the vendor will follow the timeframes specified in the due diligence package. | All content is reviewed to ensure no reuse or inclusion of copyrighted work. | HITL (human-in-the-loop) review will be utilized for both review and testing of any AI-generated production code. While QC operations may be enhanced by AI, humans are required to validate the testing and/or functionality of any developed (manual or AI) software as new products and features are added or modified. |
| IBM Bob | IBM Bob is an AI SDLC (Software Development Lifecycle) partner that augments existing workflows and helps programmers work confidently with real codebases. | IBM Bob includes real-time monitoring that checks for the presence of "hateful, abusive language" (HAP) or personally identifiable information (PII) in both the inputs and outputs of the model. | IBM ensures the responsible use of copyrighted material in "Bob" and its underlying models through a "prevention-to-protection" pipeline that combines strict data curation, real-time technical filters, and legal backing. By default, IBM does not use proprietary code or inputs to train its foundation models, ensuring that our own copyrighted work is not "leaked" to other users of the AI. Bob also uses real-time controls to ensure it does not "leak" copyrighted snippets into the workspace. | Bob does not execute complex tasks in a vacuum. It generates a "Plan" (step-by-step actions) that the user must manually approve before execution begins. |
Contact the Compliance and Risk Team
Have questions? Reach out to the Compliance and Risk Team at CU*Answers.