Audience: Managers and leaders who need to support their teams' adoption of AI tools.
Responsibilities: Instructional Design (action mapping, storyboarding, visual design, prototyping, authoring), eLearning Development, Deployment & Version Control (GitHub + Netlify)
Tools Used: Articulate Storyline 360, Figma, Gemini, Claude, Google Docs, Google Analytics, JavaScript, GitHub, Netlify
The rate at which AI tools are used in today's workforce has nearly doubled in just the past two years! Yet industry reports point to a lack of training and confidence, unclear expectations, and the associated trust, accuracy, and accountability issues. This module teaches managers how to support their teams through the rollout of a new AI tool via one-on-one support, modeling workflow integration, and reporting to upper management.
After gathering data and researching market trends, I recommend an eLearning solution for several reasons: managers can complete the training on demand, just before adopting a new AI tool, and the format supports the remote nature of many workplaces. I recommend a scenario-based approach to create an emotional experience that mimics the social and emotional complexities of disrupting a team's ingrained workflows with a new tool. Recent industry reports suggest that many employees understand the basic functionality of AI chat tools but struggle to use them to improve workflow efficiency. The manager, therefore, must choose the best strategies to support their team in adopting the tools beyond simple chat functions and authentically integrating them into daily workflows.
To produce the best solution for supporting managers in implementing AI in their workflows, I referenced industry trends, implemented the ADDIE model, and integrated a scenario-based branching module to create a cause-and-effect learning experience.
To dive deeper and produce a relevant and timely learning experience, I used the Deloitte report, “Talent and workforce effects in the age of AI,” alongside the IBM report, “Augmented work for an automated, AI-driven world,” as the “SMEs” for this concept project.
As part of the analysis process, I reviewed the industry reports to understand the skill gaps managers must overcome to foster engagement within their teams. Ensuring that the possible choices reflected the actual job tasks of managers and their teams was a critical step in mapping the decisions within the module. Each decision offers the manager three options to choose from, based on common pitfalls in tool adoption. Throughout the module, the manager must choose the best strategies to model the tool's use, handle resistance from their team, monitor and respond to their team's engagement metrics, and, in the end, share their team's engagement results with upper management.
Each of the three choices carries a different number of points, reflecting the level of engagement the manager's strategy fosters within their team. Every choice has some merit and therefore earns points toward improving the team's engagement: the best decision earns 10 points toward the user's final score, a moderate strategy earns 5 points, and the least effective strategy earns 3 points. As the user navigates the module, they accumulate points that correlate with their team's perceived engagement with the new tool, giving the user immediate feedback on how their strategies affect adoption and engagement. Regardless of the decision, the user receives feedback and insights on the chosen strategy. If the user doesn't choose the most effective strategy, they are shown the “best strategy” before moving to the next decision, so they learn the most effective methods of adoption and engagement.
In the end, the user is presented with one of three possible outcomes. A total score below 50% ends with an option to ‘try again’. A score between 50% and 75% earns a modest ‘2% cost of living adjustment’ in the end-of-quarter performance review. And a score above 75% earns a ‘merit increase of 4-6%’. These outcomes correspond to a team demonstrating low adoption, engaged use, or full integration of the tool.
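The scoring and outcome logic can be sketched in JavaScript. This is an illustrative reconstruction, not the actual Storyline variables or trigger code; the function names and tier labels are my own:

```javascript
// Points per decision, mirroring the module's three tiers.
const POINTS = { best: 10, moderate: 5, least: 3 };

// Five decisions, each choice graded as "best", "moderate", or "least".
// Returns the final score as a percentage of the maximum possible.
function scoreModule(choices) {
  const total = choices.reduce((sum, tier) => sum + POINTS[tier], 0);
  const max = choices.length * POINTS.best; // 50 for five decisions
  return (total * 100) / max;
}

// Map the final percentage to one of the three outcomes.
function outcome(pct) {
  if (pct < 50) return "try again";                 // low adoption
  if (pct <= 75) return "cost-of-living adjustment"; // engaged
  return "merit increase";                           // fully integrated
}
```

For example, a run of two best, two moderate, and one least-effective choice totals 33 of 50 points (66%), landing in the middle outcome.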
After completing the action map, I developed the five decisions covering the various challenges a manager may face during the tool adoption. Each decision framework was then adapted to create a scenario-based, emotional experience for the user. Instead of simply telling the user that the team's engagement data was lackluster, the module presents data dashboards for them to interpret. And instead of telling the user that a senior team member is hesitant and questioning the tool adoption, it presents a character who communicates their position through text-based conversation and emotional feedback via a series of posture changes.
My goal was to immerse the user in a realistic work environment where each decision offered three plausible options for driving team engagement. Instead of pairing the best strategy with obviously wrong alternatives, I developed two realistic, competing ideas for boosting engagement and buy-in from team members.
Without access to an LMS, I needed a way to host the module and still track learner engagement. I exported the project as a SCORM package from Storyline, then deployed it to Netlify via GitHub for version control and easy updates.
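The workflow looks roughly like this (an illustrative sketch; the folder and branch names are placeholders, and the Netlify site is assumed to be linked to the GitHub repo for auto-deploys):

```shell
# Commit the published Storyline output and push; Netlify redeploys on push.
git add scorm-output/
git commit -m "Publish updated module"
git push origin main

# Alternatively, deploy the output folder directly with the Netlify CLI.
netlify deploy --prod --dir=scorm-output
```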
To fill the analytics gap, I attached Google Analytics tracking through JavaScript triggers, capturing key interaction points throughout the module. It's not a replacement for a full LMS report, but it gave me meaningful engagement data.
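A minimal sketch of such a trigger, run from a Storyline “Execute JavaScript” action. It assumes the hosting page has loaded the GA4 gtag.js snippet; the event and parameter names are my own illustration, not a GA requirement:

```javascript
// Called from a Storyline "Execute JavaScript" trigger at key slides.
// Sends a custom GA4 event; fails quietly if gtag.js isn't loaded.
function trackDecision(decisionId, choiceTier) {
  if (typeof gtag !== "function") return;
  gtag("event", "decision_made", {
    decision_id: decisionId, // e.g. "decision_3_resistance" (illustrative)
    choice_tier: choiceTier, // "best" | "moderate" | "least"
  });
}
```

Attaching one such call per decision point is enough to see drop-off and choice patterns in GA4's event reports.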
I'm applying the Kirkpatrick Model of training evaluation, with data captured by Google Analytics (in place of an LMS) to monitor actual user engagement and completion of the module. I'm monitoring session duration, drop-off points, and replay rates as early engagement signals, and I'm responding to feedback from users.
This project allowed me to explore scene development, use AI tools collaboratively to design and develop each decision, build multi-layered triggers responsive to the user's choices, work with variable logic sequences, integrate JavaScript as an analytics tool, and use unique character states as a tool for emotional feedback.