Introduction: Charting your course in the age of AI
Artificial intelligence remains a buzzword, and exhibition floors at conferences can sometimes feel like luxury car showrooms—flashy and impressive, but with no touching allowed. Across functions in an organization, particularly those in learning, talent development, and training, the integration of AI represents both a significant opportunity and a daunting challenge. The real task lies not just in adopting AI but in selecting the right AI tools in a chaotic market full of flashy over-promises.
This guide equips professionals with the critical questions and knowledge needed to select the right AI tools for pilot projects and promising use cases, while effectively identifying and avoiding tools that offer little more than clumsy 'chatbot wrappers' around outdated technology.
Following on from a previous article on working with AI vendors that I co-authored with Paige Chen, this update provides professionals with a deep dive into the critical considerations that should inform AI vendor conversations and, ultimately, the selection process.
Understanding AI types: The foundation of strategic alignment
The first step in evaluating any AI solution is understanding the type of AI technology that underpins it. Broadly speaking, AI can be categorized into generative and non-generative types, each with distinct capabilities and applications.
Generative AI: The engine of innovation
Generative AI is at the forefront of technological advancement, creating text, images, audio, and video. It is not merely a tool for automating tasks but a creative force that can generate new content. The most prominent examples of generative AI are large language models (LLMs).
Non-generative AI: The optimizer of existing resources
In contrast, non-generative AI excels at analyzing and optimizing existing data. It's the silent workhorse that powers adaptive learning platforms, recommendation engines, and analytics tools. Non-generative AI can sift through vast amounts of data to identify trends, recommend content, and personalize learning experiences based on each individual's existing skills and progress.
Rather than simply choosing between generative and non-generative AI, it's essential to understand how each type is utilized in the vendors' products. By recognizing their distinct strengths, weaknesses, and potential impact on your projects, you can make informed decisions and avoid investing in tools that may ultimately be ill-suited for your intended use cases.
Data as the core of AI: Ensuring quality & relevance
Data is the foundation of AI, directly impacting its effectiveness. Evaluating AI tools requires scrutinizing the data used for training, ongoing improvement, and real-time operation.
Assessing data quality & relevance
The effectiveness of an AI system hinges on the quality and relevance of its data. A model trained on diverse, high-quality data will perform more reliably across various scenarios. Conversely, if the data is too general or misaligned with your use case, the AI's outputs may be inaccurate or even irrelevant.
Beyond initial training: Ongoing data access
Understanding what data the AI system can access for further training and how it integrates new information is crucial. This ongoing data input significantly impacts the AI's ability to adapt and improve over time.
Operational data considerations
Also consider the data the AI accesses during its real-time operations. Real-time data processing is critical for delivering accurate, up-to-date results. However, if this operational data is flawed—outdated, biased, or incomplete—the AI's outputs can become not just ineffective but potentially harmful.
To avoid these pitfalls, ask vendors about their data sources:
- Where does the data come from?
- How is it collected, processed, and updated?
- What mechanisms are in place to ensure its continued relevance and accuracy?
Also consider how data your organization already possesses might further improve the tool's functionality. When all data sources are considered together, identify whether gaps remain for the use case you intend to run.
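That gap analysis boils down to a simple set comparison: which fields the use case requires versus which fields the vendor's tool and your own systems can supply. A minimal sketch, with purely illustrative field names:

```python
# Hypothetical data-coverage check: compare the fields a use case needs
# against the fields available from the vendor's tool plus your own systems.

def find_data_gaps(required, vendor_data, internal_data):
    """Return the required fields that neither source provides."""
    available = set(vendor_data) | set(internal_data)
    return sorted(set(required) - available)

# Example: a skills-recommendation use case (field names are illustrative).
required_fields = {"role", "completed_courses", "skill_assessments", "career_goals"}
vendor_fields = {"role", "completed_courses"}   # what the tool ingests
internal_fields = {"skill_assessments"}         # what your LMS/HRIS holds

gaps = find_data_gaps(required_fields, vendor_fields, internal_fields)
print(gaps)  # → ['career_goals']
```

Even kept on a whiteboard rather than in code, this required-versus-available comparison is a quick way to surface data gaps before committing to a pilot.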
Understanding these aspects will help you determine whether the AI solution is built on a robust foundation of high-quality data, both at the outset and throughout its use.
Leveraging AI's strengths & understanding its functional pillars
Every AI tool operates within a straightforward yet powerful framework, comprising three core pillars: the data it accesses, the way it processes that data, and the outcomes or next steps it produces. Understanding these pillars is crucial for evaluating how an AI tool will perform in your specific projects and use cases.
1. Data access: The foundation of AI
The first pillar is the data that the AI engine has access to. This includes not just the initial training data but also any data the system interacts with during operation. The quality and relevance of this data directly influence the AI's ability to perform its tasks effectively.
2. Data processing: The core mechanism
The second pillar is how the AI processes this data. This is where the engine's algorithms come into play, determining how the data is interpreted and utilized. Different AI tools will have varying processing methods, depending on their design and intended use. Understanding this mechanism helps in evaluating how the tool will handle your specific data and tasks.
3. Output & next steps: The resulting action
The final pillar is the output generated by the AI—what the system produces as a result of its data processing. This could be a recommendation, an action, or any other form of output depending on the AI's function. The effectiveness of an AI tool is often judged by the accuracy and relevance of its outputs.
Applying the pillars to evaluate AI tools
By breaking down AI tools into these three pillars—data access, data processing, and output—you can gain a clearer understanding of how each tool operates. This approach demystifies the technology, allowing you to assess its strengths and limitations in a structured way. Instead of viewing AI as a black box, you can evaluate each tool based on how well it handles the data, processes it, and produces meaningful outcomes.
Understanding these functional pillars will enable you to choose AI tools that are well-suited to your specific use cases, maximizing their benefits while minimizing potential risks. This framework also facilitates more informed conversations with vendors, ensuring you select tools that genuinely meet your needs.
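One lightweight way to structure vendor notes along these pillars is a record per tool with a rating for each pillar. The 1-5 scale and equal weighting below are assumptions for illustration, not an industry-standard rubric:

```python
from dataclasses import dataclass, field

# Hypothetical evaluation record structured around the three pillars:
# data access, data processing, and output. The 1-5 ratings and equal
# weighting are assumptions, not a standard rubric.

@dataclass
class PillarScores:
    data_access: int   # quality/relevance of the data the engine can reach
    processing: int    # fit of the engine's processing method to the task
    output: int        # accuracy/usefulness of what the tool produces

@dataclass
class ToolEvaluation:
    vendor: str
    scores: PillarScores
    notes: list = field(default_factory=list)

    def overall(self) -> float:
        s = self.scores
        return (s.data_access + s.processing + s.output) / 3

eval_a = ToolEvaluation("Vendor A", PillarScores(data_access=4, processing=3, output=5))
print(round(eval_a.overall(), 2))  # → 4.0
```

Adjust the weights to your context; for many L&D use cases, data access deserves more weight than the other two pillars combined.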
Security, ownership & ethical use of your data: A comprehensive approach
The data an AI tool uses is not the only concern. A vendor's approach to data security, ownership, and ethical use of your organization's data is equally critical when evaluating AI vendors. Here's how to ensure that your data is managed with the utmost care and compliance.
1. Compliance with industry standards
A vendor's adherence to recognized standards like SOC 2, ISO 27001, GDPR (General Data Protection Regulation), and CCPA (California Consumer Privacy Act) is a strong indicator of their commitment to data security. These standards cover essential aspects such as encryption, access controls, and data management practices, ensuring that your data is protected at every stage.
2. Secure data storage & management
Effective data storage is not just about location but about how the data is managed:
- Ensure your data is stored in a secure, compliant environment, isolated from other clients' information in multi-tenant setups
- Consider the legal implications of the physical storage location: Different jurisdictions have varying data protection laws that could impact data accessibility and security
3. Encryption & data protection
Encryption is fundamental to safeguarding data, both at rest and in transit. Verify that the vendor's encryption methods meet industry standards and that they conduct regular audits and penetration tests to identify and address potential vulnerabilities.
4. Logging, monitoring & event management
A robust logging and event management system is crucial for tracking user activity and detecting anomalies. Ensure the vendor has such systems in place, complying with standards like SOC 2 and ISO 27001, to quickly identify and respond to security incidents.
The duration of log retention should align with your organization's data management policies and legal requirements. Records should remain available for audits and investigations when necessary, but should not be retained longer than needed, which reduces potential risk.
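A retention policy check of this kind is easy to sketch: flag records whose timestamps fall outside the window so they can be reviewed for deletion. The 365-day window below is an assumed policy for illustration, not a recommendation:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention check: flag log records older than the policy
# window. The 365-day window is an assumption for illustration; use your
# organization's actual retention policy.

RETENTION = timedelta(days=365)

def expired_records(records, now):
    """Return ids of records whose timestamp falls outside the retention window."""
    return [r["id"] for r in records if now - r["timestamp"] > RETENTION]

now = datetime(2024, 11, 1, tzinfo=timezone.utc)
logs = [
    {"id": "evt-1", "timestamp": datetime(2023, 1, 15, tzinfo=timezone.utc)},
    {"id": "evt-2", "timestamp": datetime(2024, 9, 30, tzinfo=timezone.utc)},
]
print(expired_records(logs, now))  # → ['evt-1']
```

When asking vendors about their logging, confirm whether such expiry is enforced automatically or depends on manual cleanup.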
5. Data ownership & access rights
Clearly define data ownership to understand your rights regarding data use, sharing, and monetization. This includes:
- Clarifying access rights within your organization and for the vendor, ensuring that sensitive information is accessible only to those who need it
- Confirming that access controls are robust, particularly in multi-tenant environments where data isolation is essential
6. Ethical use of demographic data
Demographic data must be handled with transparency and care:
- Ensure the data is used ethically, enhancing learning outcomes for all participants without reinforcing biases
- Implement strict retention policies, retaining data only as long as necessary and ensuring anonymization to protect privacy
7. Data retention & anonymization
Retention policies should align with legal and ethical obligations, ensuring that data is kept only as long as necessary. Anonymization is key to protecting individual privacy, though it's important to recognize the limits of anonymization and implement additional safeguards to prevent re-identification.
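One common safeguard worth asking vendors about is keyed hashing of identifiers. Note that this is pseudonymization, not true anonymization: anyone holding the key can re-link records, and unique quasi-identifiers (role, location, tenure) can still enable re-identification. A minimal sketch, with an assumed key-management setup:

```python
import hashlib
import hmac

# Sketch of pseudonymization via keyed hashing (HMAC-SHA256). This is
# pseudonymization, not anonymization: the key holder can re-link records,
# and quasi-identifiers may still allow re-identification, so additional
# safeguards are needed.

SECRET_KEY = b"replace-with-a-managed-secret"  # assumption: held in a secrets vault

def pseudonymize(identifier: str) -> str:
    """Map an identifier to a stable, non-reversible-without-key token."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

token = pseudonymize("employee-10042")
print(len(token))  # 64 hex characters; stable for the same input and key
```

The practical question for a vendor is who controls that key, and whether pseudonymized records are ever combined with demographic fields that could undo the protection.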
8. Comprehensive data practices review
A thorough review of a vendor's data practices should encompass:
- Storage compliance with regulations across jurisdictions
- Effective anonymization and protection against re-identification
- Strong access controls and regular audits to prevent unauthorized access
- Clear data retention policies and ethical data usage practices
Conclusion: Navigating AI tool selection amidst market chaos
In a marketplace as dynamic and chaotic as today's AI landscape, it's more critical than ever for professionals to stay informed and discerning. The ability to cut through the noise and make strategic decisions about AI tools through well-informed vendor conversations is a crucial skill.
This guide has emphasized the importance of thoroughly evaluating AI tools by understanding their core strengths and weaknesses. Whether it's assessing the type of AI, the quality of the data, or the security and ownership of that data, these considerations are vital in determining how well a tool will meet your specific needs. Recognizing these factors allows you to select tools that are not just innovative but are genuinely aligned with your organizational goals and operational requirements.
By focusing on these key aspects—AI types, data quality, security, ownership, and ethical data practices—you can ensure that your organization is not only prepared for the future but also positioned to thrive amidst the noise. The choices you make today will shape your organization's ability to innovate and adapt in a landscape where the latest trends often overshadow what truly matters.
Professionals who can confidently evaluate AI tools, understand their potential and limitations, and stay current with market developments will be the ones driving value and ensuring their organizations remain at the forefront of innovation.
Explore AI at our full-day symposia
Learn more from Markus Bernhardt and other expert speakers this fall. The Learning Guild is hosting a full-day deep dive into AI at each of our events! Join us at:
- The AI & Learning Symposium, November 5, 2024, co-located with DevLearn 2024 Conference & Expo, November 6-8, 2024, in Las Vegas
- A Learning Leader's Guide to AI, December 3, 2024, co-located with Learning 2024, December 4-6, 2024, in Orlando
Following the symposium, stay for the full conference. Each event features a range of sessions addressing uses for AI-powered tools, strategies for integrating AI into your workflow, and much more. Register today!