A comprehensive framework for measuring impact and driving continuous improvement in AI-era capability building
Framework Overview
This framework outlines how Big Lever Institute assesses the impact of its programs and monitors participant, partner, and community outcomes. It is designed for both internal improvement and public transparency, providing a rigorous, data-driven narrative for our unique AI-era capability-building programs.
Our approach combines quantitative metrics with qualitative insights to create a comprehensive understanding of program effectiveness and participant success across multiple dimensions and timeframes.
Theory of Change
The Challenge
A once-in-a-millennium technology shift has outpaced traditional training, creating dramatic gaps in skills, evidence, and access.
Our Response
Studio-style, hands-on cohorts that let more people build capability, produce visible portfolios, and share open methods.
The Impact
Expanding the pipeline of AI-ready founders and operators equipped for the intelligent systems era.
Outcome Tiers and Timeline
1
Immediate Outcomes
0-30 days post-program
Program completion and portfolio creation.
2
Short-term Outcomes
30-90 days post-program
Skills put to use and first traction in work or ventures.
3
Intermediate Outcomes
3-12 months post-program
New employment, venture development, and skill application in real projects.
4
Long-term Impact
12+ months post-program
Career advancement, venture sustainability, and contributions to the broader ecosystem.
Core Metrics
Participation & Engagement
Applications received, acceptance, and completion rates
Demographic and geographic diversity of participants
Learning & Skill Development
Skill assessments: before, during, and after the program
Demonstrated proficiency: AI tools, project management, prompt engineering
Portfolio artifacts: quality, documentation, and creativity
Real-world use: application to work, contributions to open projects, or teaching others
Career & Venture Outcomes
Operator Track: Interviews and job offers within 3-6 months, job relevance to AI, and retention.
Founder Track: New concepts or prototypes built; users or customers acquired; revenue, grants, or investment secured; and venture continuity.
Both Tracks: Promotions, expanded responsibilities, salary increases, and expanded professional networks.
Program Effectiveness
Participant satisfaction scores (Net Promoter Score, or NPS)
Quality of instruction, mentor value, and platform usability
Alumni engagement, ongoing community, and peer-to-peer support
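The NPS figure used in this framework follows the standard calculation. As a concrete reference, a minimal sketch; the survey scores below are hypothetical, not real cohort data:

```python
def nps(scores):
    """Net Promoter Score: percent promoters (9-10) minus percent
    detractors (0-6), from 0-10 answers to the standard
    "How likely are you to recommend this program?" question.
    """
    if not scores:
        raise ValueError("no survey responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical cohort survey of 10 responses:
sample = [10, 9, 9, 8, 8, 7, 10, 6, 9, 10]
print(nps(sample))  # → 50 (6 promoters, 1 detractor, 10 responses)
```

Scores of 7-8 ("passives") count toward the denominator but neither add nor subtract, which is why NPS can range from -100 to +100.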
Organizational & Sector Impact
Open resources created: prompts, SOPs, playbooks, and downloads
Partner organization satisfaction and renewal rate
External adoption of resources, references in academic or industry work
Data Collection Methods
Quantitative
Automated tracking (attendance, portfolio submissions), surveys, and admin data.
Qualitative
Interviews, focus groups, case studies, open feedback, and external partner input.
Validation
Employer and partner surveys, independent evaluation when possible.
Success Benchmarks
85-90%
Cohort Completion
Target completion rate for all program participants
90%
Portfolio Submission
Participants completing final portfolio requirements
60-75%
Employment/Venture Traction
Success rate within 6 months post-program
50+
NPS Baseline
Participant satisfaction score target
2–3
Open Resources Published
Per-cohort target for resources shared for broader community impact
10+
Resource Adoption
External organizations per year utilizing our published resources
Benchmarks are reviewed and updated annually based on real results.
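As an illustration of how a cohort's results might be checked against the targets above, a minimal sketch; the metric keys and cohort figures are assumptions for the example, not actual program data:

```python
# Lower bounds drawn from the benchmarks above (ranges use their floor).
TARGETS = {
    "completion_rate": 0.85,   # 85-90% cohort completion
    "portfolio_rate": 0.90,    # 90% portfolio submission
    "traction_rate": 0.60,     # 60-75% employment/venture traction
    "nps": 50,                 # NPS baseline
    "open_resources": 2,       # 2-3 open resources per cohort
}

def benchmark_report(results):
    """Map each benchmark to True/False: did the cohort meet its target?

    Metrics missing from `results` default to 0, i.e. not met.
    """
    return {k: results.get(k, 0) >= target for k, target in TARGETS.items()}

# Hypothetical cohort results:
cohort = {"completion_rate": 0.88, "portfolio_rate": 0.93,
          "traction_rate": 0.58, "nps": 54, "open_resources": 3}
print(benchmark_report(cohort))  # traction_rate misses its 60% floor; the rest are met
```

A report like this makes the annual benchmark review concrete: any False flags a target to investigate or revise.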
Reporting & Transparency
01
Monthly
Participation and engagement metrics tracking
02
Quarterly
Cohort outcomes summary and analysis
03
Annually
Full impact report and public brief publication
Select metrics are published via our website and impact dashboards, ensuring public accountability and transparency in our mission.
Partners receive customized reports aligned to their own KPIs, fostering collaborative improvement and shared success metrics.
Continuous Improvement
Measure Outcomes
Comprehensive data collection across all program dimensions
Drive Improvement
Curriculum and operations enhancement in every cohort
Gather Feedback
Regular staff, stakeholder, alumni and partner input
Refine Framework
Ongoing evaluation and framework optimization
Outcomes are not only measured but also drive curriculum and operations improvement in every cohort. The framework is refined through regular staff and stakeholder review, alumni and partner feedback, and outside evaluation.
Privacy & Ethics
Explicit Consent
All outcome data is collected with clear, informed participant consent
Data Protection
Information is anonymized where required and stored with enterprise-grade security
Purpose Limitation
Data used solely for educational, improvement, transparency, and research purposes
Participant Control
Individuals maintain the right to control the use of their personal data
Our commitment to ethical data practices ensures participant trust while enabling impact measurement and program improvement.
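One common technique behind the anonymization mentioned above is keyed hashing of participant identifiers. This is a minimal sketch under assumptions not specified in the framework (HMAC-SHA256 with a per-dataset secret salt), not a description of our actual pipeline:

```python
import hashlib
import hmac
import os

# Hypothetical: a secret salt generated per export keeps pseudonyms
# consistent within a dataset but unlinkable across datasets; discarding
# the salt afterward makes re-identification infeasible.
SALT = os.urandom(32)

def pseudonymize(participant_id: str) -> str:
    """Replace a raw participant ID with a keyed hash (HMAC-SHA256 hex)."""
    return hmac.new(SALT, participant_id.encode(), hashlib.sha256).hexdigest()

# Outcome records carry the pseudonym, never the raw identifier:
record = {"id": pseudonymize("alumni-0042"), "completed": True, "nps_score": 9}
```

A keyed hash (rather than a plain hash) matters because participant IDs are low-entropy: without the secret salt, anyone could hash candidate IDs and match them against the dataset.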
Building the Future
This outcomes framework ensures that Big Lever Institute's impact, at every level, can be measured and improved while building a growing, equitable pipeline for the age of intelligent systems.
Measurable
Rigorous data collection and analysis across all program dimensions
Reportable
Transparent communication of outcomes to all stakeholders
Improvable
Continuous enhancement driven by evidence and feedback
Equitable
All programming and resources are published openly and available at no cost