Evidence-based research assistant for faster literature answers
Consensus is an evidence-based research assistant that finds and summarizes peer-reviewed findings and public research to answer natural-language questions. It’s best for researchers, product managers, and clinicians who need rapid, sourced summaries rather than raw papers. Pricing includes a usable free tier plus paid plans for teams, making it accessible to individuals while offering scaled features for professional workflows.
Consensus is an AI-powered research and learning tool that finds, reads, and summarizes scientific literature to answer natural-language questions. It aggregates peer-reviewed papers, preprints, and authoritative sources, then surfaces concise, evidence-weighted answers with links to original studies. The platform’s key differentiator is automated evidence synthesis — it highlights supporting and opposing papers, shows sample sizes, and cites exact passages. Consensus serves researchers, product teams, healthcare professionals, and students who need quick, sourced answers for decisions or literature reviews. A free tier exists with limits; paid plans add team features and higher query volumes.
Consensus is an AI-driven research and learning application launched to streamline literature discovery and evidence synthesis. Originating from a team focused on improving how people access scientific consensus, the product indexes peer-reviewed journals, preprints, and reputable websites to generate concise answers to user queries. Its core value proposition is saving hours of manual searching by automatically extracting claims, surfacing the highest-quality evidence, and presenting citations and excerpts so users can verify the source quickly. The company positions itself as a bridge between raw research and practical decision-making for non-experts and specialists alike.
The product’s feature set emphasizes three main capabilities. First, Answer synthesis provides a one-paragraph summary of the consensus on a question and lists supporting and contradicting studies with direct links and highlighted quotes, enabling source verification. Second, Search by claim lets users paste assertions or questions and returns ranked evidence with metadata such as publication date, study size, and evidence strength. Third, Cite & Export options permit exporting the answer and source list as shareable links or copying citations for reports. Consensus also offers saved searches and team sharing (on paid plans) so groups can maintain research libraries and collaborate on questions and findings.
Consensus pricing includes a free tier and paid options for heavier use. The free plan allows a limited number of queries per month and access to the core answer synthesis and source links (exact query limits are published on their site and may change). Paid subscriptions—listed on Consensus’s pricing page—unlock higher monthly query allowances, team features, saved libraries, and priority support; enterprise/custom pricing is available for large organizations requiring SSO and admin controls. The free tier is suitable for occasional users and students, while the paid tiers target professionals and teams who run frequent evidence searches and need collaboration features and export controls.
Users range from academic researchers doing rapid literature scans to non-academic decision-makers needing evidence support. For example, a clinical researcher uses Consensus to triage whether new interventions have consistent trial results before deeper review. A product manager uses it to summarize market-adjacent academic findings to inform roadmaps and feature prioritization. The platform is often compared to academic search engines and AI literature assistants like Semantic Scholar and Elicit; Consensus distinguishes itself by prioritizing concise, evidence-weighted summaries with direct quotes and citation lists rather than raw paper discovery alone.
Three capabilities set Consensus apart from its nearest competitors: answer synthesis, claim-based search, and cite-and-export workflows.
Current tiers and what you get at each price point. Prices are indicative and subject to change; confirm current rates on the vendor's pricing page.
| Plan | Price | What you get | Best for |
|---|---|---|---|
| Free | Free | Limited monthly queries, answer summaries, basic source links only | Students and casual users testing the product |
| Individual | $19/month | Higher monthly queries, saved searches, export answers | Professionals needing regular evidence answers |
| Team | $59/user/month | Team libraries, collaboration, priority support, higher limits | Small teams doing shared research workflows |
| Enterprise | Custom | SSO, admin controls, custom quotas, SLAs | Large orgs requiring compliance and scale |
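To compare tiers on an annual basis, the listed monthly rates can be totaled per seat. A minimal sketch, assuming the table's prices (which may change; check the vendor's pricing page) and standard monthly billing:

```python
# Indicative annual cost comparison for the tiers listed above.
# Prices are taken from the pricing table and may change.
PLANS = {
    "Free": 0.0,         # $/month
    "Individual": 19.0,  # $/month
    "Team": 59.0,        # $/user/month
}

def annual_cost(plan: str, seats: int = 1) -> float:
    """Return the yearly cost for `seats` users on `plan`."""
    return PLANS[plan] * seats * 12

print(annual_cost("Individual"))     # 228.0 per year
print(annual_cost("Team", seats=5))  # 3540.0 per year for a 5-person team
```

At these rates, a five-seat Team plan runs about $3,540/year, a useful baseline when weighing it against individual licenses.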
Copy these into Consensus as-is. Each targets a different high-value workflow.
- Role: You are Consensus, an AI that finds, reads, and synthesizes peer-reviewed literature.
- Task: Answer a single clinical question I will provide.
- Constraints: Search literature from the past 10 years; prioritize randomized trials and systematic reviews; select the top 5 most relevant studies by relevance and sample size; produce one concise 150-word summary that states the overall finding and clinical implication; give a simple evidence strength label (Strong / Moderate / Weak).
- Output format: 1) One-line conclusion, 2) 150-word evidence summary, 3) Evidence strength label, 4) Three citations with PMID or DOI.
- Example input: "Does daily low-dose aspirin prevent preeclampsia?"
- Role: You are Consensus, summarizing scientific literature for academic use.
- Task: Create a 150-word paragraph summarizing evidence for the question I supply.
- Constraints: Include 3 in-text citations formatted as [AuthorYear PMID/DOI]; list the three primary supporting studies below the paragraph with sample sizes and exact quoted sentences (<=20 words) from each paper that support the claim; include only peer-reviewed clinical or human studies.
- Output format: 1) 150-word paragraph with three in-text citations, 2) Bullet list of three studies with sample size and <20-word quoted supporting excerpt.
- Example input: "Caffeine intake and miscarriage risk."
- Role: You are Consensus, summarizing comparative evidence between two interventions I name.
- Task: Produce a structured comparison to inform a product roadmap decision.
- Constraints: 1) Limit to human clinical trials and meta-analyses, 2) report effect size range and median (with 95% CI where available), 3) list number of studies, total N, and top 3 supporting and top 2 opposing papers.
- Output format: a) 3-sentence executive summary, b) side-by-side bullets for Intervention A vs B (efficacy, safety, typical population), c) table-like bullets: number of studies, total N, median effect (95% CI), d) links to top 5 papers.
- Example input: "Intervention A: digital CBT app; Intervention B: face-to-face CBT for mild-moderate depression."
- Role: You are Consensus, a literature-synthesis assistant for researchers.
- Task: Map current knowledge and propose next experiments for my topic.
- Constraints: 1) Provide 3 clearly numbered gaps in evidence with supporting citations, 2) propose 3 feasible follow-up experiments (brief methods, sample size justification, expected measurable outcome), 3) list the five highest-impact papers with a short rationale for each.
- Output format: 1) One-paragraph overview, 2) Numbered gaps with citations, 3) Three proposed experiments as short protocol bullets (sample size and primary endpoint), 4) Top-5 papers with 1-line rationale each.
- Example input: "Microbiome modulation to reduce chemotherapy-induced mucositis."
- Role: You are Consensus, conducting high-level triage and heterogeneity analysis across many studies.
- Task: For a literature corpus on my question (I will paste or describe inclusion criteria), identify study clusters, quantify heterogeneity drivers, and provide an evidence-weighted pooled estimate where possible.
- Multi-step instructions: 1) List inclusion criteria and a screening summary (N studies found, excluded, included); 2) cluster studies by design/population/intervention and summarize each cluster (median sample size, common endpoints); 3) identify the top 5 sources of heterogeneity with citations and examples; 4) provide a conservative pooled effect estimate with uncertainty and the method described (random-effects), or explain why pooling is invalid.
- Output format: Numbered steps with citations and brief numeric summaries.
- Few-shot example: Show a short mock screening result and cluster output for guidance.
- Role: You are Consensus, acting as evidence synthesis lead for a regulatory briefing.
- Task: Produce a concise risk-benefit assessment and recommended regulatory actions for a therapeutic or device.
- Multi-step constraints: 1) Summarize pivotal efficacy trials and safety signals with exact quoted safety endpoints and sample sizes, 2) produce a 3x4 risk-benefit table (benefit rows, risk rows; columns: magnitude, certainty, key citations), 3) list 4 possible regulatory actions ranked by evidence strength with pros/cons, 4) call out any data gaps that would change the recommendation and the specific studies that would resolve them.
- Output format: Executive summary (<=200 words), risk-benefit table (bullet rows), ranked actions with citations. Provide a brief example of how to phrase an action and its evidence basis.
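If you reuse these templates often, it can help to keep them as parameterized strings and fill in the question before pasting into Consensus. A minimal sketch, assuming a hypothetical `{question}` placeholder added for illustration (the template text here abbreviates the first prompt above):

```python
# Reusable prompt template with a {question} placeholder (an illustration,
# not a Consensus feature): fill it in, then paste the result into the app.
CLINICAL_TEMPLATE = (
    "Role: You are Consensus, an AI that finds, reads, and synthesizes "
    "peer-reviewed literature. "
    "Task: Answer this clinical question: {question} "
    "Constraints: search literature from the past 10 years; prioritize "
    "randomized trials and systematic reviews; select the top 5 most "
    "relevant studies; produce one concise 150-word summary; give an "
    "evidence strength label (Strong / Moderate / Weak). "
    "Output format: 1) One-line conclusion, 2) 150-word evidence summary, "
    "3) Evidence strength label, 4) Three citations with PMID or DOI."
)

def build_prompt(question: str) -> str:
    """Insert the question into the template, ready to paste."""
    return CLINICAL_TEMPLATE.format(question=question.strip())

print(build_prompt("Does daily low-dose aspirin prevent preeclampsia?"))
```

Keeping one template per workflow (clinical triage, citation-ready summary, comparison, gap analysis) makes the constraints consistent across repeated queries.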
Choose Consensus over Elicit if you prioritize concise, evidence-weighted summaries with direct quoted citations for quick decision-making.
Head-to-head comparisons between Consensus and top alternatives: