Project 04

Keylime Interactive
AI + Upskilling

Experience Studio Project


Designing AI-Powered Upskilling Tools for Early Professionals and Students

Project Overview

In today's rapidly evolving job market, "upskilling"—the process of learning new, relevant skills—is no longer optional. While Artificial Intelligence presents a massive opportunity to personalize and scale learning, its role is often met with a mix of excitement and skepticism. Our team partnered with Key Lime Interactive (KLI), a woman & minority-owned UX/CX agency, to research and identify opportunities in this complex space.

Team

1 UX Graduate Student, 4 UX Undergraduate Upperclassmen, 4 UX Undergraduate Underclassmen

Project Scope

UX Design

Research

AI

Wireframing

Prototyping

Tools

Figma

FigJam

Timeline

January - May 2025

My Contributions

Foundational Research & Synthesis

Primary Research & User Engagement

Collaborative Ideation & Sketching

Mid-Fidelity Prototyping & Testing

User Testing & Feedback Analysis

Documentation & Reporting

Design Space
Problem Statement

Professionals at all levels know they need to keep learning, but the path is often unclear. Early-career professionals feel overwhelmed by resources, while students struggle to bridge the gap between academic theory and real-world application. Companies invest in learning platforms, but these often fail to address the core human needs for guidance, feedback, and practical experience.

Design Question

How can AI be leveraged to create upskilling solutions that are effective, trusted, and address the nuanced needs of both students and professionals?

Project Goals

Communicate our findings after investigating AI and upskilling and exploring where and how they intersect

Based on our findings, design and iterate a possible solution combining AI and upskilling, developed to mid-fidelity

User Group

Upperclassmen Students

Early-Career Professionals

Approach

Our approach was a deep, multi-phase investigation. We started with research to build a comprehensive understanding from the ground up, then carried our key insights into ideation, where we developed sketches and wireframes for concept testing. The results guided the mid-fidelity prototypes we used in usability tests. Our final deliverables included our research insights, prototypes, and the results of our usability testing.

Phase One: Primary and Secondary Research

This phase aimed to establish a foundational understanding of AI in the context of upskilling. We conducted secondary research, a comparative analysis of existing platforms, user surveys, and 12 interviews.

Phase Two: Ideation and Sketching

We synthesized our research insights into opportunity areas and generated low-fidelity sketches through a Crazy 8s workshop, then voted on the strongest concepts to carry forward.

Phase Three: Prototyping and Testing

We iterated our strongest concepts into wireframes and mid-fidelity prototypes, then validated them through value proposition and usability testing with both user groups.

Phase One: Primary and Secondary Research

This phase aimed to establish a foundational understanding of AI in the context of upskilling. We conducted secondary research, a comparative analysis of existing platforms, user surveys, and 12 interviews.

Guiding Questions

What are the core user needs and pain points in upskilling?

How do AI tools currently support or hinder progress toward user goals?

Goals

Develop a comprehensive understanding of AI and Upskilling tools

Analyze existing solutions in the market

Identify user motivations and pain points in the upskilling process

Secondary Research

This research sprint aimed to explore broader trends, opportunities, and challenges in upskilling and AI, giving us a comprehensive understanding of the external landscape that was crucial for strategic planning.

Activities in this sprint include:

  • Literature reviews

  • Conducting a comparative analysis

  • Conducting a SWOT analysis

Literature Review

We split our team into two to look at two major topics: Artificial Intelligence and Upskilling. We looked at both academic articles and user opinions on sites like Reddit to gather a wide swath of data.

Comparative Analysis

We conducted a comparative analysis to gain insight into the tools currently available, researching existing platforms that offer upskilling resources with AI-powered features.

The platforms we analyzed fell into four segments:

  • Specialized Learning

  • Learning Systems

  • Skill Development

  • Career Coaching

SWOT Analysis

Our SWOT analysis covered the same four platform segments as the comparative analysis, examining the strengths, weaknesses, opportunities, and threats of each. This helped us further evaluate the effectiveness of existing AI-driven upskilling solutions, pinpoint gaps, and explore areas for innovation and improvement.

Primary Research

This sprint used both interviews and a survey. The interviews allowed us to collect deep qualitative data about our user groups, and the survey allowed us to collect quantitative data about our most readily available user group.

Survey

We did not get the representative sample of our target population that we would need to make claims about how our user group feels. Instead, we treated the following patterns as starting points and explored them further through secondary research:

  • Preferred upskilling methods include watching YouTube videos, taking online courses, and practicing through hands-on projects. 

  • 50% of students who upskill spend 1-2 hours a week and 50% spend 3-5 hours, which shows their willingness to invest their time in upskilling. 

  • 69.2% of respondents use AI tools like ChatGPT daily and are open to using AI for learning. However, people still prefer human guidance for career-related learning. 

Using an incentive, we successfully collected contacts for additional interviews. The results of these interviews can be found below.

Interviews

We conducted 12 interviews with people from all user groups to understand the role and perception of AI as well as key pain points in upskilling. We interviewed:

  • 3 upperclassmen

  • 3 early career professionals

  • 3 experienced professionals

  • 3 hiring managers

Key Research Insights

Across all groups, a clear narrative emerged. While AI is seen as a useful tool for efficiency, it is not yet trusted as a teacher.

Human Interaction is Essential: The most valuable learning experiences involve mentorship, peer collaboration, and direct feedback from managers. AI was not seen as a replacement for this.

Low Trust in AI for Critical Tasks: Users were comfortable using AI for basic, repetitive tasks (summarizing, ideating) but were highly skeptical of its ability to provide nuanced feedback, evaluate complex work, or teach unfamiliar topics.

Lack of Structure is a Major Barrier: Both students and professionals struggle with knowing what to learn next. They desire clear, personalized roadmaps but find current AI recommendations too generic.

Soft Skills are Underdeveloped: A significant gap exists in developing crucial soft skills like communication and leadership through digital tools.

Practical Experience > Certifications: Hiring managers and professionals alike value hands-on, real-world project experience far more than certificates from online courses.

Phase Two: Opportunity Exploration and Ideation

Our research revealed that a single "one-size-fits-all" platform would be ineffective. Instead, we identified key opportunity areas and designed a suite of targeted, AI-powered features that could be integrated into a cohesive upskilling ecosystem. The goal was not to replace human interaction, but to augment it.

We performed extensive ideation and sketching, including a Crazy 8s workshop.

Guiding Question

How can we use our synthesized research to define our opportunities and ideate on potential solutions?

Goals

Define opportunities we identified through both primary and secondary research.

Ideate potential solutions to these opportunities.

Sketch initial low-fidelity solutions that addressed our gaps.

Identifying Gaps

From our research findings, we identified and synthesized primary gaps and opportunities for ideation:

Not enough human interaction and constant feedback

Diverse learning styles (structured vs unstructured)

Building users’ trust in AI (using AI in a more dependable role)

Encourage the learning of soft skills

Efficient knowledge building

Need for experience relevant to real-world projects

Initial Ideation

With these gaps defined, we moved into rapid, divergent ideation to generate as many potential solutions as possible:

Crazy 8's Workshop

Each member of the team took one of the gaps from our research synthesis and drew one sketch per minute for 8 minutes, leaving us with 64 sketches and ideas to discuss. After sharing, we placed our sketches in FigJam and voted with star stickers on the ideas and sketches we felt were strongest.

Through this, we identified three essential features where AI can play a key role in supporting users on their upskilling journeys:

Human Interaction:

Helps users stay connected with peers and build a sense of motivation and accountability throughout their learning process.

Personalization:

Assists users in better understanding their individual needs and clarifies what steps they need to take to make progress in their upskilling journey.

Credibility:

Builds trust by ensuring users feel secure about how their data is handled and confident in the quality and reliability of the resources they use.

Phase Three: User Feedback and Iteration

Taking our low-fidelity sketches, we gathered feedback through value proposition and usability testing, then iterated the sketches into wireframes and mid-fidelity prototypes.

Guiding Question

How can we use our previous insights and ideation to create and validate AI-powered upskilling solutions?

Goals

Gain feedback and insights on sketches through user testing.

Test, refine, and further iterate sketches.

Value Proposition and User Testing

We took our best sketches and tested them with both of our user groups to gather insights into what they prefer and what they would use.

Final Solutions

We selected the highest-scoring low-fidelity sketches from our user testing, iterated on them, and developed them into mid-fidelity prototypes that communicate our solutions:

AI Job Simulator

This feature addresses the need for practical experience. It assigns users realistic, task-based projects that mirror real-world workflows. Instead of just theory, users "learn by doing" in a simulated professional environment, building a portfolio of tangible work.
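
To make the concept concrete, here is a minimal, hypothetical sketch of how a simulator's underlying data model might work; the class and field names are illustrative assumptions, not part of our mid-fidelity prototype.

```python
# Hypothetical sketch of a job-simulator data model: a project is a sequence of
# realistic, task-based steps, and the next step unlocks once the current one
# has a submission that can later be shown as portfolio work.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class SimTask:
    title: str
    skill: str                        # e.g. "usability testing"
    brief: str                        # realistic, workflow-style instructions
    submission: Optional[str] = None  # the user's work product


@dataclass
class SimProject:
    role: str                         # e.g. "Junior UX Researcher"
    tasks: List[SimTask] = field(default_factory=list)

    def next_task(self) -> Optional[SimTask]:
        """First task without a submission, or None when the project is done."""
        return next((t for t in self.tasks if t.submission is None), None)

    def portfolio(self) -> List[SimTask]:
        """Completed tasks that become tangible portfolio pieces."""
        return [t for t in self.tasks if t.submission is not None]


project = SimProject(
    role="Junior UX Researcher",
    tasks=[
        SimTask("Plan a usability test", "research planning", "Draft a 5-user test plan."),
        SimTask("Synthesize findings", "analysis", "Turn session notes into three insights."),
    ],
)
project.next_task().submission = "test-plan-v1.pdf"
print(project.next_task().title)  # -> Synthesize findings
```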

AI Feedback Dashboard

To solve the problem of unstructured growth, this tool would integrate with performance reviews. It uses AI to analyze and summarize feedback from managers and peers, identifying strengths and weaknesses. It then generates a personalized "roadmap" with links to specific internal company resources, giving employees a clear, actionable path for improvement.
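
As a rough illustration only (not our prototype's implementation), the dashboard's summarization step could group free-text review comments into skill themes and map each theme to internal resources. The theme keywords and resource paths below are invented for the example; a real system would likely use an ML or LLM classifier rather than keyword matching.

```python
# Hypothetical sketch: group free-text review feedback into skill themes and
# map each theme to (invented) internal resources to form a simple roadmap.
from collections import Counter
from typing import List

# Illustrative theme keywords; a production system would use a trained classifier.
THEMES = {
    "communication": ["present", "explain", "stakeholder", "writing"],
    "data analysis": ["sql", "metrics", "dashboard", "analysis"],
    "leadership": ["delegate", "mentor", "ownership", "lead"],
}

RESOURCES = {  # invented internal links
    "communication": "intranet/learning/storytelling-workshop",
    "data analysis": "intranet/learning/sql-fundamentals",
    "leadership": "intranet/learning/leading-without-authority",
}


def build_roadmap(feedback_notes: List[str], top_n: int = 2) -> List[dict]:
    """Count theme mentions across manager/peer notes and suggest resources."""
    counts = Counter()
    for note in feedback_notes:
        lowered = note.lower()
        for theme, keywords in THEMES.items():
            if any(keyword in lowered for keyword in keywords):
                counts[theme] += 1
    return [
        {"focus_area": theme, "mentions": n, "resource": RESOURCES[theme]}
        for theme, n in counts.most_common(top_n)
    ]


notes = [
    "Great SQL work, but explain findings to stakeholders more clearly.",
    "Strong analysis; could take more ownership when presenting.",
]
print(build_roadmap(notes))
```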

AI Mock Interviewer

This tool focuses directly on soft skill development. Users can practice for interviews and receive real-time, AI-driven feedback on metrics like tone of voice, vocabulary, and response clarity, helping them build confidence and job-readiness.
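
For illustration, one slice of that feedback could be computed from a transcribed answer with simple heuristics such as pace, filler-word rate, and vocabulary variety. The metric names and thresholds below are assumptions made for this sketch, not what we designed or tested.

```python
# Hypothetical sketch: score a transcribed interview answer on a few
# clarity-related metrics and return plain-language coaching tips.
FILLERS = {"um", "uh", "like", "basically", "actually"}


def answer_feedback(transcript: str, duration_sec: float) -> dict:
    words = transcript.lower().split()
    wpm = len(words) / (duration_sec / 60) if duration_sec else 0.0
    filler_rate = sum(w.strip(".,") in FILLERS for w in words) / max(len(words), 1)
    vocab_variety = len(set(words)) / max(len(words), 1)

    tips = []
    if wpm > 170:
        tips.append("Try slowing down; aim for a conversational pace.")
    if filler_rate > 0.05:
        tips.append("Reduce filler words such as 'um' and 'like'.")
    if vocab_variety < 0.4:
        tips.append("Vary your wording to keep the answer engaging.")

    return {
        "words_per_minute": round(wpm),
        "filler_rate": round(filler_rate, 2),
        "vocab_variety": round(vocab_variety, 2),
        "tips": tips,
    }


print(answer_feedback(
    "Um, I basically led the, um, redesign of our onboarding flow.",
    duration_sec=12,
))
```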
Design Rationale
Building Trust

We position AI as a supportive assistant rather than an authoritative judge. It analyzes data, suggests paths, and provides practice, but the critical feedback and decision-making loops still involve humans (managers, mentors, peers).

Fostering Competence

By focusing on hands-on simulators and personalized roadmaps, our features help users build tangible skills and see their progress clearly, directly addressing the desire for real-world application.

Supporting Autonomy

The Feedback Dashboard and Job Simulator provide the clear guidance users are missing, while still allowing them the autonomy to choose which skills to focus on and how to learn them.

Limitations

Potential for Participant Bias: Our participants were largely familiar with AI tools. This may have skewed our findings, as their perceptions might not reflect those of professionals who are less experienced with, or more skeptical of, AI.

Limited Industry Input: As a student-led project, our access to a broad spectrum of industry professionals was limited. Deeper collaboration with more companies would be necessary to fully understand specific corporate upskilling needs.

Next Steps

Refine and Integrate Prototypes:
Develop the proposed features into a single, high-fidelity, and functional prototype to test the cohesive user journey.

Conduct Broader User Testing: Validate the solutions with a larger, more diverse user pool to ensure the features are effective and trusted across different industries and roles.

Explore AI Ethics and Feasibility:
Conduct a deeper investigation into the technical and ethical implications of implementing these AI models, ensuring fairness, transparency, and data privacy.
