
Enterprise AI Analysis: Understanding Human Feedback

What's In My Human Feedback? Learning Interpretable Descriptions of Preference Data

Authored by: Rajiv Movva, Smitha Milli, Sewon Min, Emma Pierson

Abstract: Human feedback can alter language models in unpredictable and undesirable ways, as practitioners lack a clear understanding of what feedback data encodes. While prior work studies preferences over certain attributes (e.g., length or sycophancy), automatically extracting relevant features without pre-specifying hypotheses remains challenging. We introduce What's In My Human Feedback? (WIMHF), a method to explain feedback data using sparse autoencoders. WIMHF characterizes both (1) the preferences a dataset is capable of measuring and (2) the preferences that the annotators actually express. Across 7 datasets, WIMHF identifies a small number of human-interpretable features that account for the majority of the preference prediction signal achieved by black-box models. These features reveal a wide diversity in what humans prefer, and the role of dataset-level context: for example, users on Reddit prefer informality and jokes, while annotators in HH-RLHF and PRISM disprefer them. WIMHF also surfaces potentially unsafe preferences, such as that LMArena users tend to vote against refusals, often in favor of toxic content. The learned features enable effective data curation: re-labeling the harmful examples in Arena yields large safety gains (+37%) with no cost to general performance. They also allow fine-grained personalization: on the Community Alignment dataset, we learn annotator-specific weights over subjective features that improve preference prediction. WIMHF provides a human-centered analysis method for practitioners to better understand and use preference data.

Executive Impact & Key Metrics

WIMHF delivers quantifiable improvements in AI alignment and understanding.

+37% Safety Gains with Data Curation
Majority of Black-Box Preference Signal Captured
7 Datasets Analyzed
Features Validated Against Annotator Explanations

Deep Analysis & Enterprise Applications

Select a topic to dive deeper, then explore the specific findings from the research, rebuilt as interactive, enterprise-focused modules.

Calculate Your Potential ROI

See how WIMHF can translate into tangible value for your enterprise.


Your WIMHF Implementation Roadmap

A structured approach to integrating human-centered AI analysis into your workflow.

Phase 1: Data Preparation & SAE Training

Curate your preference datasets and train Sparse Autoencoders to discover measurable features. WIMHF works across diverse datasets, from LMArena to HH-RLHF.
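To make the SAE step concrete, below is a minimal sketch of a top-k sparse autoencoder forward pass in plain NumPy. All names, dimensions, and the top-k rule are illustrative assumptions; a real pipeline would train such a model (typically in a deep-learning framework) on embeddings of response pairs, not run a single random forward pass.

```python
import numpy as np

def topk_sae_forward(x, W_enc, b_enc, W_dec, k):
    """Forward pass of a toy top-k sparse autoencoder.

    x: (d,) embedding of a response (or a response-pair difference).
    W_enc: (m, d), b_enc: (m,), W_dec: (d, m), with m >> d in practice.
    Keeping only the k largest activations forces each input to be
    described by a handful of candidate 'features'.
    """
    acts = np.maximum(W_enc @ x + b_enc, 0.0)   # ReLU activations, shape (m,)
    if k < acts.size:
        thresh = np.partition(acts, -k)[-k]     # k-th largest activation
        acts = np.where(acts >= thresh, acts, 0.0)
    x_hat = W_dec @ acts                        # reconstruction, shape (d,)
    return acts, x_hat

# Toy example with hypothetical sizes: 8-dim embeddings, 32 features, k=4.
rng = np.random.default_rng(0)
d, m, k = 8, 32, 4
x = rng.normal(size=d)
W_enc = rng.normal(size=(m, d)) / np.sqrt(d)
W_dec = rng.normal(size=(d, m)) / np.sqrt(m)
acts, x_hat = topk_sae_forward(x, W_enc, np.zeros(m), W_dec, k)
```

The sparsity constraint (at most k active features per input) is what makes each feature a candidate for a short natural-language description in the next phase.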

Phase 2: Feature Interpretation & Validation

Generate natural-language descriptions for each discovered feature and validate their fidelity, i.e., check that a description accurately predicts where its feature actually activates. This keeps the features human-interpretable and consistent with expert understanding.
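One simple way to quantify fidelity, sketched below, is to compare a judge's predictions (made from the description alone) against the feature's actual activations. Balanced accuracy is an assumed metric choice here, and the function name is hypothetical, not WIMHF's actual validation code.

```python
def description_fidelity(predicted, actual):
    """Balanced accuracy of a feature description.

    predicted[i]: 1 if a judge, reading only the natural-language
    description, says the feature should fire on example i.
    actual[i]: 1 if the SAE feature actually fires on example i.
    A score near 1.0 means the description faithfully tracks the feature.
    """
    tp = sum(1 for p, a in zip(predicted, actual) if p and a)
    tn = sum(1 for p, a in zip(predicted, actual) if not p and not a)
    pos = sum(actual) or 1                        # guard against empty classes
    neg = (len(actual) - sum(actual)) or 1
    return 0.5 * (tp / pos + tn / neg)

score = description_fidelity([1, 1, 0, 0], [1, 0, 0, 0])  # one false positive
```

Descriptions that score poorly can be regenerated or the underlying feature discarded before any downstream analysis.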

Phase 3: Preference Analysis & Insight Generation

Identify expressed preferences and analyze conflicts across datasets. Pinpoint unsafe annotations and potential reward hacking vulnerabilities.
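As a sketch of this step, one can regress preference labels on per-pair differences in feature activations; the sign of each learned weight then indicates whether annotators prefer or disprefer that feature. Everything below (function name, synthetic data, plain-NumPy logistic regression) is illustrative, not WIMHF's actual estimator.

```python
import numpy as np

def preference_weights(delta_feats, labels, lr=0.5, steps=500):
    """Fit P(chose A) = sigmoid(delta_feats @ w) by gradient ascent.

    delta_feats[i]: feature activations of response A minus response B
    for pair i; labels[i] = 1 if annotators chose A, else 0.
    A positive learned weight marks a preferred feature, a negative
    weight a dispreferred one.
    """
    n, d = delta_feats.shape
    w = np.zeros(d)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(delta_feats @ w)))
        w += lr * delta_feats.T @ (labels - p) / n  # log-likelihood gradient
    return w

# Synthetic check: feature 0 is preferred, feature 1 is dispreferred.
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 2))
true_logits = 2.0 * X[:, 0] - 2.0 * X[:, 1]
y = (rng.random(400) < 1.0 / (1.0 + np.exp(-true_logits))).astype(float)
w = preference_weights(X, y)
```

Comparing such weights across datasets is how dataset-level conflicts surface, e.g., a positive weight on an informality feature in Reddit data but a negative one in HH-RLHF or PRISM.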

Phase 4: Actionable Intervention & Personalization

Implement data curation strategies (e.g., relabeling unsafe examples) and enable fine-grained personalization based on subjective feature preferences.
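A minimal sketch of the relabeling idea, assuming a flagged "unsafe" feature whose activations are available per response; the flip rule and threshold are illustrative assumptions, not the paper's procedure, and in practice flagged pairs warrant manual review.

```python
import numpy as np

def relabel_unsafe(labels, unsafe_chosen, unsafe_rejected, threshold=0.5):
    """Flip preference labels on pairs where the chosen response fires a
    flagged 'unsafe' feature (e.g. toxic content beating a refusal) far
    more strongly than the rejected one.
    """
    gap = unsafe_chosen - unsafe_rejected
    flip = gap > threshold
    return np.where(flip, 1 - labels, labels), flip

labels = np.array([1, 1, 0, 1])            # 1 = annotator chose response A
chosen = np.array([0.9, 0.1, 0.2, 0.8])    # unsafe-feature activation on A
rejected = np.array([0.1, 0.0, 0.1, 0.9])  # unsafe-feature activation on B
new_labels, flipped = relabel_unsafe(labels, chosen, rejected)
```

The same per-pair feature activations support personalization: learning annotator-specific weights over subjective features instead of one global preference model.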

Ready to Transform Your AI Alignment?

Let's discuss how WIMHF can bring clarity and control to your human feedback processes.
