JPMorgan Chase is integrating artificial intelligence tool usage into the official performance reviews for its software engineers, with new goals set to appear in employee targets by the end of March.
JPMorgan Chase now requires its 65,000-member Global Technology team to master and demonstrably apply AI assistants like GitHub Copilot. This directive formalizes the expectation that generative AI is not merely an optional productivity aid but a mandatory component of job performance, directly linking AI adoption rates to individual career progression. This move codifies a new standard for engineering excellence and is likely to set a precedent across the financial services industry.
What We Know So Far
- JPMorgan Chase is making artificial intelligence skills a mandatory requirement for its software engineers, directly tying the use of AI tools to their performance ratings.
- According to reporting from The Economic Times, new AI-focused objectives will officially appear in employee performance targets by the end of March.
- The policy will affect most developers within the bank's 65,000-member Global Technology team by the end of March 2026, as confirmed by NewsBytes.
- Managers at the bank are already tracking how frequently their technology teams use AI tools, with internal dashboards reportedly monitoring the installation and usage of assistants like GitHub Copilot.
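The kind of usage tracking described above can be sketched with a small aggregation script. Note that the event log format, field names, and metrics below are illustrative assumptions for this article, not JPMorgan's actual telemetry or GitHub Copilot's real event schema:

```python
from collections import defaultdict

# Hypothetical event log: (engineer_id, tool, event) tuples, where
# event is "installed" or "suggestion_accepted". Fabricated sample data.
events = [
    ("eng-001", "copilot", "installed"),
    ("eng-001", "copilot", "suggestion_accepted"),
    ("eng-001", "copilot", "suggestion_accepted"),
    ("eng-002", "copilot", "installed"),
    ("eng-003", "copilot", "suggestion_accepted"),
]

def adoption_summary(events, team_size):
    """Roll per-engineer events up into team-level adoption metrics."""
    accepts = defaultdict(int)
    installed = set()
    for engineer, tool, event in events:
        if event == "installed":
            installed.add(engineer)
        elif event == "suggestion_accepted":
            accepts[engineer] += 1
    active = {e for e, n in accepts.items() if n > 0}
    return {
        "installed_pct": 100 * len(installed) / team_size,
        "active_pct": 100 * len(active) / team_size,
        "accepts_per_active_user": sum(accepts.values()) / max(len(active), 1),
    }

print(adoption_summary(events, team_size=4))
# → {'installed_pct': 50.0, 'active_pct': 50.0, 'accepts_per_active_user': 1.5}
```

A real dashboard would read these events from IDE telemetry rather than a hardcoded list, but the shape of the metric — installs, active users, and accepted suggestions per user — is the kind of figure a manager's dashboard would surface.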
JPMorgan Chase AI Performance Review Integration
JPMorgan Chase engineers are now required to "drive excellence" by leveraging AI, making it a core component of their evaluation. This new performance mandate, detailed in internal communications, represents one of the most structured corporate efforts to embed AI into the software development lifecycle, and it is designed to produce measurable improvements in code quality, development velocity, and overall productivity.
The initiative encompasses the majority of JPMorgan Chase's 65,000-person Global Technology organization, a division backed by a planned technology spend of approximately $20 billion in 2026. By tying performance reviews to AI adoption, the financial giant is using its human resources framework to accelerate its return on this massive technological investment, ensuring its workforce is not only equipped with but also proficient in using next-generation development tools.
JPMorgan is preparing to test Anthropic's Claude Code, with a potential start in April, according to NewsBytes. This suggests a broader, multi-platform approach beyond GitHub Copilot, aimed at identifying the most effective AI assistants for various coding tasks and preventing dependency on a single vendor. Developers at the bank already have access to models from both OpenAI and Anthropic, and this new policy will now formally measure how they are put to use.
The Growing Trend of AI-Driven Productivity Metrics
JPMorgan Chase's decision is a high-profile example of a burgeoning trend: technology-forward enterprises are creating systems to measure and enforce AI adoption as they invest heavily in the technology. The goal is to ensure these powerful new tools translate into tangible business value. This shift is particularly evident in software engineering, where the output can be more easily quantified.
Meta, for instance, is pursuing a similar path as part of its broader strategy to become an "AI-native company." According to a report from Blockchain Council, the company has set aggressive internal benchmarks for AI-assisted development. One goal reportedly calls for 65% of engineers in its Creation organization to produce over 75% of their committed code using AI tools by the first half of 2026. Another aims for 55% of all software engineering code changes to be "Agent-Assisted" by the fourth quarter of 2025.
A Meta spokesperson reportedly clarified that performance rewards are ultimately based on the impact of an engineer's work, not on raw AI usage statistics. This clarification highlights a critical discussion point for all companies implementing AI adoption policies: How do you differentiate between meaningful, productivity-enhancing AI use and superficial adoption designed merely to satisfy a metric? Is tracking tool usage a reliable proxy for innovation, or could it inadvertently incentivize quantity over quality?
Microsoft CEO Satya Nadella stated in May 2025 that approximately 30% of code at his company was already being generated by AI. This data point underscores the rapid, industry-wide integration of AI into core development workflows at the world's largest software companies. JPMorgan's move to formalize this in performance reviews simply makes the implicit expectation explicit.
How AI is Transforming Engineer Performance Evaluations
The integration of AI usage into performance management signals a fundamental transformation in how engineering contributions are measured. For decades, organizations have struggled with imperfect metrics—like lines of code, number of commits, or features shipped—that often fail to capture the true value of an engineer's work, such as code quality, mentorship, or complex problem-solving. The introduction of AI adoption as a key performance indicator (KPI) adds a new, technology-centric layer to this complex evaluation process.
The underlying rationale for this push is a pursuit of hyper-efficiency. The corporate hypothesis is that AI coding assistants can automate routine and boilerplate tasks, freeing up highly skilled, and highly paid, senior engineers to focus on system architecture, innovation, and other high-impact challenges. By mandating and measuring the use of these tools, companies aim to accelerate this transition and maximize the productivity of their most valuable technical talent.
If you are an engineer or a technology manager, your performance conversations are about to change. The focus may shift from "what did you build?" to "how did you build it, and how efficiently?" You may be asked to demonstrate how you have leveraged AI to reduce development time, improve test coverage, or refactor legacy code. This requires a new skill set, one that emphasizes effective prompt engineering and the critical evaluation of AI-generated code, rather than just the act of writing code from scratch.
This trend also serves as a preparatory step for the next wave of automation. Today's tools function primarily as co-pilots or assistants. However, the industry is moving rapidly toward more autonomous systems, often referred to as agentic AI, which can handle complex, multi-step tasks with minimal human intervention. By training the workforce to collaborate with AI assistants now, organizations are building the cultural and technical foundation needed to integrate more powerful autonomous agents in the future.
What Happens Next
The most immediate milestone is the end of March, when these new AI-centric goals are scheduled to be formally integrated into performance targets at JPMorgan Chase. The technology and financial sectors will be watching closely to see how the policy is implemented and how the bank's vast engineering workforce responds to this new, explicit directive.
Several critical questions remain unanswered. How, precisely, will the bank measure the *impact* of AI use beyond simple usage statistics? Will it develop proprietary metrics to correlate AI adoption with improvements in code quality, a reduction in production bugs, or faster project completion times? The answers will be instructive for other enterprises contemplating similar policies.
Looking forward, the success or failure of this initiative could have a ripple effect across industries. If JPMorgan can demonstrate a clear, quantifiable link between its new performance standards and improved business outcomes, it is highly likely that other major corporations, particularly in regulated fields like finance and healthcare, will adopt similar frameworks. The benchmarks being set by technology leaders like Meta and Microsoft, combined with the formal HR integration at JPMorgan, are collectively forging a new consensus: proficiency with AI is no longer a forward-looking skill but a present-day requirement for the modern software engineer.