Jenius Bank

Building a Research Function inside a Digital Bank

SMBC, a global financial institution headquartered in Tokyo, launched Jenius Bank as its strategic entry into U.S. digital consumer banking.

The initiative aimed to build a modern, full-service digital bank combining traditional financial products with integrated financial-wellness tools. Unlike many fintech startups, Jenius was being built inside a large regulated enterprise, requiring the organization to balance startup-like speed with the operational rigor of a global bank.

To support this ambition, Jenius invested heavily in in-house product, engineering, and UX capabilities to design and deliver a next-generation digital banking platform.

Within the first three years, the platform successfully scaled to:

  • 250,000 active customers

  • $2B in deposits and $2B in loan balances

  • 50k transfers, 35k payments, and 7k mobile deposits

My Role

I joined Jenius in its early formation as Director of Research and Innovation, responsible for establishing the organization’s research capability and embedding customer insight into product, brand, and go-to-market strategy:

  • defining the bank’s ideal customer profile and product differentiation

  • supporting go-to-market strategy and brand positioning

  • guiding product strategy across savings, lending, and future credit offerings

  • building and scaling the UX research function from the ground up

Organizational Scope

The research function operated in close partnership with leadership across CX, design, and product.

CX Leadership Team:

Drew Hopkins — Director, Research & Innovation
Tracey Dunlap — Head of CX
Dean Valentin — Design Director

Research Team:

Sarah Harris — Staff UX Researcher
Hannah Elbigarah — Senior UX Researcher
Danica Calderon — Senior UX Researcher
John Miramontes — UX Researcher

Tools

UserZoom, Qualtrics, SurveyMonkey, EnjoyHQ, Figma, Confluence, Miro, Fintech Insights, Corporate Insight, Comperemedia

 
 

Research Foundation

As Jenius Bank scaled, demand for research began emerging across product, marketing, brand, and design teams. But in its early stages, the organization lacked a shared understanding of how research could operate and how teams could best engage with it.

Many requests were solution-driven and late-stage: opportunities for discovery were being missed, and timeline expectations were not feasible. Without a clear operating model, research risked becoming reactive, fragmented, and difficult to scale.

Strategy and Education

I introduced foundational research frameworks through several demos and forums to help stakeholders understand how research supports the product lifecycle, explaining the sequence from discovery through iteration and validation.

This clarified when research should occur, what types of questions it could answer, and how it fit within product and design processes. It also explained how research could adapt its rigor, and the confidence of its conclusions, to stakeholder needs.

Research Playbook

Next, I developed a standardized research playbook covering:

  • research methods, use cases, and trade-offs

  • typical timelines per method with consideration for planning and synthesis

  • recruitment strategies and statistical significance

  • touchpoint expectations for collaborative scoping, planning, and reporting

This enabled our team to rally around processes that were transparent and repeatable across business functions. The business evolved through consistent engagement with us and easy reference to our playbook.

Standardized Intake

As research demand increased, I introduced a centralized intake process (via Jira) for stakeholders, along with scoping guidelines for researchers, that proactively required teams to articulate:

  • business objectives and context

  • the problem space and research questions

  • timeline constraints

  • decision points the research would inform

This created consistent research engagement and helped transition requests from informal conversations to structured, decision-oriented research planning.
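
To make this concrete, here is a minimal sketch of the intake fields as a data model; the field names are illustrative, not the actual Jira form configuration.

```python
from dataclasses import dataclass

# Illustrative intake schema (hypothetical field names,
# not the actual Jira form configuration).
@dataclass
class ResearchRequest:
    business_objective: str        # what the business is trying to achieve
    context: str                   # background and any prior work
    research_questions: list[str]  # problem-space questions to answer
    timeline_constraints: str      # deadlines or launch dependencies
    decision_points: list[str]     # decisions the findings will inform

request = ResearchRequest(
    business_objective="Increase savings-account funding conversion",
    context="Drop-off observed between application start and funding",
    research_questions=["Where in the funding flow do users hesitate, and why?"],
    timeline_constraints="Findings needed before the next design sprint",
    decision_points=["Redesign the funding step vs. add guidance content"],
)
print(request.decision_points)
```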

 

Impact

Establishing a clear operating model helped transition research from an emerging function into a trusted organizational capability.

  • Appetite for research increased across product, marketing, and brand teams.

  • Stakeholders involved research earlier in product development cycles.

  • Requests became more clearly scoped and aligned to business decisions.

  • Leadership gained broader visibility, saw the value, and invested in a top-tier research toolset.

By clarifying how research operates and how teams should engage with it, the organization moved from ad hoc research requests to a more structured, insight-driven workflow.

 
 

Insight Infrastructure

I knew from previous roles that as research expands across an organization, insights become increasingly difficult to discover and reuse. When findings are stored and shared only within immediate project teams, insights become scattered. So I strategically created scalable knowledge infrastructure to prevent insight silos later on.

Centralized Research Repository

I implemented a dedicated repository in EnjoyHQ where studies, artifacts, and reports could be stored in a structured, searchable format. This created a single source of truth for research across the company.

Taxonomy and Tagging Systems

To make insights discoverable, I designed a framework that organized studies with consistent titling, a primary category (product or marketing domain), and several secondary attributes.

I unified the titling convention and categories across other systems where studies had dependencies (UserZoom, SharePoint, and Jira), making research intuitive to navigate across platforms.

I initially experimented with tagging individual insights, but it proved too heavy for the organization’s pace. I shifted to a lighter model that supplemented tagging with secondary attribute categories. This offered multiple entry points depending on how teams were approaching a problem:

  • business or insight theme

  • research method

  • device type

  • participant source

  • primary researcher

  • date created

This structure balanced searchability with operational speed, allowing the repository to scale.
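
As a rough illustration, a study entry under this model might look like the sketch below; the attribute values and filter helper are hypothetical, not our production EnjoyHQ schema.

```python
# Hypothetical study entry showing the lighter tagging model:
# one primary category plus secondary attributes as entry points.
studies = [
    {
        "title": "2024-03 Savings | Funding Flow Usability",
        "primary_category": "Savings",            # product or marketing domain
        "theme": "onboarding friction",           # business or insight theme
        "method": "moderated usability test",
        "device": "mobile",
        "participant_source": "panel",
        "researcher": "S. Harris",
        "created": "2024-03-12",
    },
]

def find_studies(attribute: str, value: str) -> list[dict]:
    """Filter the repository by any secondary attribute."""
    return [s for s in studies if s.get(attribute) == value]

print(find_studies("device", "mobile"))
```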

Standardized Outputs

To ensure consistency across studies, I curated method-specific templates for research plans and reports.

For example, an interview planning template included a standard set of sections to bootstrap research preparation; a representative skeleton is sketched below.
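
The exact template isn't reproduced here, but a representative skeleton, with illustrative section names, might look like this:

```python
# Representative interview-plan skeleton (section names are
# illustrative, not the exact template).
INTERVIEW_PLAN_TEMPLATE = {
    "Background & Objectives": "What decision will this study inform?",
    "Research Questions": "What do we need to learn, stated as questions?",
    "Participant Criteria": "Who are we recruiting, and from what source?",
    "Discussion Guide": "Warm-up, core topics, and probing questions.",
    "Logistics": "Session length, incentives, and recording consent.",
    "Analysis & Reporting Plan": "How findings will be synthesized and shared.",
}

for section, prompt in INTERVIEW_PLAN_TEMPLATE.items():
    print(f"## {section}\n   {prompt}\n")
```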

 

Impact

Teams explored existing research prior to initiating new studies, reducing reliance on the research team and removing us as a bottleneck to learning and discovery.

As I continuously sought my team's feedback for improvements on categorization and templating, our team also gained a shared language for organizing data at a high level, improving our efficiency while preserving our creativity in analysis and reporting.

 
 
 
 

Prioritization and Roadmap

As Jenius launched its products and digital platforms, demand for research grew rapidly and requests arrived through a variety of informal channels—messages, email, meetings, and ad hoc conversations.

This created several challenges: redundant asks across multiple teams, limited visibility into our research pipeline, and difficulty aligning studies with overall business impact.

We needed a more systematic approach to governing demand and capacity.

Executive Guidance

I met with the executive team on a quarterly basis to discuss high-level discoveries and developments. It was essential to align executives with the research function and to ground research in leadership's expectations and KPIs.

Research Triage for Web / Mobile 2.0

While 1.0 was essentially out-of-the-box, 2.0 presented opportunities to customize and add functionality within the capacity of design and engineering. Brand, design, product, platform, strategy, and engineering leadership all had varying perspectives and were engaging research frantically for guidance!

I led a series of workshops to slow down and unify the conversation around a shared reality:

  1. Grounded teams in existing personas and the business ethos and mission.

  2. Brainstormed research needs in a psychologically safe space. No idea was a bad idea.

  3. Asked teams to (honestly) prioritize needs on a grid of uncertainty versus impact (scoring sketched below).

  4. Synthesized redundant needs and identified potential study overlap.

  5. Debriefed teams on the outcomes and negotiated the highest impact/uncertainty needs together.

  6. Pressure-tested these against design and engineering capacity.

For example, we discovered many needs shared a common anxiety around navigation mental models and feature appetite. We established a conceptual baseline by addressing this holistically through market intel, card sorting, and preference testing. We then narrowed and tested design concepts for navigation and the highest-preference actions.
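
As a rough illustration of the grid mechanics from step 3, here is a minimal scoring sketch with hypothetical needs and scores:

```python
# Hypothetical scoring sketch: rate each research need on
# uncertainty and impact (1-5), then rank by the combined score.
needs = [
    {"need": "Navigation mental models", "uncertainty": 5, "impact": 5},
    {"need": "Appetite for money-movement features", "uncertainty": 4, "impact": 4},
    {"need": "Icon style preference", "uncertainty": 2, "impact": 1},
]

for n in needs:
    n["priority"] = n["uncertainty"] * n["impact"]

for n in sorted(needs, key=lambda n: n["priority"], reverse=True):
    print(f'{n["priority"]:>2}  {n["need"]}')
```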

As a result, teams had a clear and cohesive understanding of research momentum and trade-offs; when research needs were de-prioritized, it was recognized as serving the greater good of the business.

Backlog and Steering Committee

I started organizing our intake into a shared backlog, categorizing efforts by method, timeline, and vertical so that I could visualize research demand across the organization.

I regrouped with stakeholders across the business in a monthly forum. In this steering committee, I surfaced key insights from recent studies, then discussed upcoming research to de-risk competing priorities, always through the lens of impact versus certainty. In cases of conflict, our team had the foresight to plan for reallocation or to consolidate efforts into a single study.

 

Impact

Stakeholders gained collaboration and consensus on business strategy; research was prioritized by impact rather than urgency and moved upstream in product planning; and researchers improved their focus and efficiency with fewer disruptions from stakeholders.

This enabled us to centralize guidance, optimize requests, build confidence and transparency with partner teams, and envision a 3-6 month roadmap.

 
 

AI-Empowerment

As generative AI tools were rapidly evolving, I saw an opportunity to improve our efficiency and insight generation.

However, the limitations of these tools were still unclear. Teams across the org were experimenting independently, without clear guidance on where AI was useful versus where human expertise remained essential.

Team Touchpoints and Workshops

Rather than allow fragmented experimentation, I provided structured exploration of AI within the team. Every month we met to discuss findings or informally compare notes.

Research Workflow Mindmap

Each researcher categorized their typical efforts, broke them down into fundamental processes, then evaluated each process in terms of AI potential. This broadened the team's thinking about potential AI use cases and helped us discover new applications.

 

Gains and Pains Retro

Each researcher listed the different applications of AI they had attempted over the past month, then reflected on their usefulness and efficiency. We shared prompts for especially meaningful scenarios. This helped us learn from each other: we could avoid repeating poor AI applications while sharing meaningful ones.

Competitive Gathering Acceleration

As a new institution, keeping current on competitive intel was critical, and I took steps early on to equip teams with strong visibility into the fintech landscape:

  • a market intelligence database focused on deposits and digital banking

  • a UX journey database for analyzing competitor experience

  • monthly updates on key competitor developments and innovative features

  • quarterly updates on competitive metric positioning

  • a standardized Miro workspace for ad hoc analysis of competitor UX flows

Despite these investments, demand for competitive analysis continued to grow as strategy teams explored new product differentiators (loan top-ups, origination fee structures, etc.)!

I experimented with transitioning parts of our competitive intelligence workflow to AI-assisted analysis. I developed a standardized set of prompts that allowed AI tools to generate initial metrics, graphs, and summaries.
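
The production prompts aren't reproduced here, but a standardized competitive-intel prompt might be parameterized roughly like the sketch below; the wording and fields are illustrative.

```python
# Illustrative sketch of a standardized competitive-intel prompt
# (wording and fields are hypothetical, not our production prompts).
PROMPT_TEMPLATE = """\
You are assisting a banking research team with competitive intelligence.

Task: Summarize {competitor}'s current {product_area} offering.
Sources: {sources}

Output:
1. Key metrics (rates, fees, limits) in a table.
2. Notable feature changes since {last_review_date}.
3. A three-sentence summary of strategic positioning.

Flag any figure you cannot verify against the sources as UNVERIFIED.
"""

prompt = PROMPT_TEMPLATE.format(
    competitor="Example Bank",
    product_area="high-yield savings",
    sources="public rate pages and press releases",
    last_review_date="2024-01-01",
)
print(prompt)
```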

With stakeholder approval, the team adopted an AI-first workflow for both recurring and new competitive intel tasks. As the system proved reliable, the team then shifted from drafting reports manually with AI assistance to AI-generated reports followed by human fact-checking and refinement.

This change significantly accelerated the process. What previously required a week of researcher effort could now be completed in about a day, freeing the team to focus on higher-value research work.

Experimentation to Integration

Through workshopping ideas, experimenting with new standards, and trial and error, our team saw a gradual adoption of AI into our daily workflows.


 
 

Lessons Learned

Research functions need legitimacy and education before they can scale. Early on, the biggest challenge was not capacity; it was organizational clarity.

A heavy tagging system can actually hinder insight discoverability. It is better to keep early repository infrastructure lean and intuitive rather than comprehensive.

Don't allow the team to become a ticketing system. Introducing intake and prioritization systems aligned research with highest impact rather than urgency or pressure.

AI is a workflow augmentor, not a replacement. AI works best when it accelerates preparation, summarization, and aggregation; it struggles with interpretation and judgment.

 
Next

FIS