Visual research mapping for literature discovery and synthesis
Research Rabbit is a visual literature-discovery and collection tool that helps researchers build interactive maps of papers and citation networks. It is aimed at academics, PhD students, and R&D teams who need exploratory literature discovery and citation tracking, and it offers a usable free tier with paid plans for larger libraries and team collaboration.
Research Rabbit is a research & learning tool that visualizes papers, authors, and citations to help users discover related literature. It creates interactive graph maps and curated collections so researchers can explore citation trails and topic clusters. The platform’s key differentiator is its network-style discovery interface that surfaces related work through citation and co-authorship connections rather than keyword-only search. Research Rabbit serves graduate students, academic researchers, and corporate R&D teams seeking exploratory literature review. A free tier is available for basic libraries, with paid plans for larger libraries and team features.
Research Rabbit is a literature-discovery and visualization platform launched to help researchers move beyond linear search lists. Founded to address the exploration phase of literature review, it positions itself between reference managers and discovery engines by focusing on visual, network-driven discovery. Instead of presenting hits as a ranked list, Research Rabbit builds interactive graphs showing papers, authors, citations, and topical clusters so users can follow citation trails and spot influential works or emerging subtopics. Its core value proposition is making discovery serendipitous and visible, enabling users to quickly expand a seed set of papers into a broader map of related literature.
Key features include interactive graph maps that display papers, authors, and citation links; you can expand nodes to reveal citing or cited works and visually traverse networks. The Collections feature lets users build and share curated libraries; collections carry metadata and PDF links when available, and can be exported as BibTeX. Research Rabbit also offers an Alerts/Updates feed that surfaces new papers related to a collection or an author over time. The platform supports search by DOI, title, or author, pulling bibliographic metadata and citation relationships from Crossref and Semantic Scholar indexing (with links to full text where available). The UI supports side-by-side list and graph views, so you can switch between visual discovery and sortable lists.
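Since collections export as BibTeX, it helps to know what a minimal exported entry looks like. The Python sketch below is purely illustrative: the field mapping and citation-key scheme are assumptions for demonstration, not Research Rabbit's actual exporter.

```python
# Hedged sketch: turn a Crossref-style metadata dict into a minimal BibTeX entry.
# Field mapping and key scheme are illustrative assumptions, not the real exporter.

def to_bibtex(meta: dict) -> str:
    """Format a minimal @article entry from common bibliographic fields."""
    surname = meta["authors"][0].split()[-1].lower()  # first author's surname
    key = f"{surname}{meta['year']}"                  # e.g. smith2019
    return (
        f"@article{{{key},\n"
        f"  title  = {{{meta['title']}}},\n"
        f"  author = {{{' and '.join(meta['authors'])}}},\n"
        f"  year   = {{{meta['year']}}},\n"
        f"  doi    = {{{meta['doi']}}}\n"
        f"}}"
    )

entry = to_bibtex({
    "title": "Example Paper",
    "authors": ["Jane Smith", "Wei Chen"],
    "year": 2019,
    "doi": "10.1000/example",
})
print(entry)
```

Entries like this import cleanly into Zotero or any LaTeX workflow, which is the main reason BibTeX export matters in practice.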
Pricing includes a free tier that lets users create personal collections and explore graphs, with limits on library size and collaboration. As of 2026, the free plan permits a modest number of saved items and collection exports; paid subscriptions remove library limits, enable team workspaces, and add priority support. Pro/Individual tiers are billed per user per month, with higher rates for Team and Enterprise plans that add centralized billing and admin controls; Research Rabbit also offers custom enterprise pricing for institution-wide deployments and single sign-on (SSO). Exact monthly prices and seat discounts vary, so check Research Rabbit's pricing page for the most current numbers.
Researchers, graduate students, and research managers use Research Rabbit for literature discovery, gap analysis, and onboarding into new topics. For example, a PhD student can expand a seed set of 10 seminal papers into a 200-paper literature map to prepare a review chapter, and a corporate R&D scientist can monitor citations and new publications in a technology area to inform competitive intelligence. The product is often compared with graph-based discovery tools like Connected Papers and reference managers like Zotero; Research Rabbit's strength is the network map and collection sharing, while rivals may offer deeper PDF management or full-text search indexing.
Three capabilities set Research Rabbit apart from its nearest competitors: interactive citation-graph maps, shareable curated collections, and continuous alerts tied to those collections.
Current tiers and what you get at each price point; confirm current numbers against the vendor's pricing page.
| Plan | Price | What you get | Best for |
|---|---|---|---|
| Free | Free | Limited saved items, basic collections, single-user only | Casual researchers and students exploring topics |
| Individual / Pro | Exact monthly price on site | Higher library size, unlimited collections, priority support | Active researchers who need larger libraries |
| Team | Exact monthly price on site | Shared workspaces, team libraries, admin controls | Small labs and research teams |
| Enterprise | Custom | SSO, centralized billing, custom limits and support | Institutions needing large-scale deployment |
Copy these prompt templates into Research Rabbit as-is; each targets a different high-value workflow.
You are the Research Rabbit assistant. Task: starting from exactly five seed papers I will paste as DOIs or full citations in <SEED_PAPERS>, expand to a curated 50-paper discovery map using citation and co-authorship links only (no keyword bias). Constraints: include up to 2 citation hops, prioritize review papers and highly cited foundational works, avoid unrelated tangents. Output format: numbered list of 50 entries with fields: Title; Authors; Year; Citation distance (1 or 2); One-line justification for inclusion. Example entry: 1) Title; Authors; 2012; distance 1; 'Foundational review linking methods A and B.'
You are Research Rabbit helping a PhD student build a 12-week reading schedule from a collection I will paste as up to 30 paper IDs or the collection link <COLLECTION_ID>. Constraints: each week 3-4 papers, total 12 weeks, balanced mix of theory, methods, and recent empirical work, include one actionable learning goal and estimated reading time per week. Output format: week number; theme; 3-4 paper titles with IDs; learning goal (1 sentence); estimated hours. Example: Week 1; Introduction to X; Paper A (ID), Paper B (ID), Paper C (ID); Goal: understand core assumptions; 6 hours.
You are Research Rabbit configured for an R&D scientist tracking a technology area specified as <TOPIC>. Produce a monitoring workflow with saved search queries, alert keywords, recommended filters (venues, years, authors), and an automated triage rubric. Constraints: provide 3 saved queries, 5 high-value alert terms, filters for source types, and a 3-tier priority rubric with scoring rules. Output format: JSON with keys saved_queries (list), alert_terms (list), filters (object), triage_rubric (array of tier objects with score thresholds). Example triage tier: {name: 'High', score_range: '8-10', action: 'Immediate read and add to team library'}.
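The JSON schema requested in the monitoring prompt above is easy to get subtly wrong, so a concrete conforming example helps. The Python sketch below builds one; the topic, queries, venues, and thresholds are invented placeholders, not recommendations.

```python
import json

# Illustrative monitoring config matching the prompt's requested schema.
# All topic-specific values (queries, venues, terms) are invented placeholders.
config = {
    "saved_queries": [
        "solid-state batteries review",
        "solid electrolyte interface degradation",
        "lithium metal anode cycling",
    ],
    "alert_terms": [
        "sulfide electrolyte", "dendrite", "interfacial resistance",
        "cycle life", "pilot-scale",
    ],
    "filters": {
        "venues": ["Nature Energy", "Joule"],
        "years": "2020-2026",
        "source_types": ["journal", "preprint"],
    },
    "triage_rubric": [
        {"name": "High", "score_range": "8-10",
         "action": "Immediate read and add to team library"},
        {"name": "Medium", "score_range": "4-7",
         "action": "Queue for weekly review"},
        {"name": "Low", "score_range": "0-3",
         "action": "Archive with tags"},
    ],
}
print(json.dumps(config, indent=2))
```

Counting the elements confirms the prompt's constraints are met: 3 saved queries, 5 alert terms, and a 3-tier rubric with explicit score ranges.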
You are Research Rabbit acting as a research manager assistant. Using topic description <TOPIC>, build a shared collection of exactly 40 papers grouped into 6 thematic clusters and assign each cluster to one of five team members with roles I will provide as a list <TEAM_ROLES>. Constraints: include at least 6 methodological or benchmark papers pinned, tag each paper with theme, priority (high/medium/low), and one-sentence rationale. Output format: CSV rows with columns: Theme, Paper Title, Authors, Year, Tags, Priority, Assigned Team Member, One-line Rationale. Example row: Optimization, Title A, Smith et al., 2019, tags: benchmark;optimizer, High, Alice, 'Standard benchmark for X'.
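Before distributing the CSV the prompt above produces, it is worth validating the columns and priority values. This short Python sketch does that with the standard library; the column names come from the prompt, while the single data row is an invented example.

```python
import csv
import io

# Columns exactly as the prompt specifies; the data row is an invented example.
COLUMNS = ["Theme", "Paper Title", "Authors", "Year", "Tags",
           "Priority", "Assigned Team Member", "One-line Rationale"]

raw = """Theme,Paper Title,Authors,Year,Tags,Priority,Assigned Team Member,One-line Rationale
Optimization,Title A,Smith et al.,2019,benchmark;optimizer,High,Alice,Standard benchmark for X
"""

rows = list(csv.DictReader(io.StringIO(raw)))
assert all(set(row) == set(COLUMNS) for row in rows)   # every row has every column
assert {row["Priority"] for row in rows} <= {"High", "Medium", "Low"}  # allowed values
print(f"{len(rows)} row(s) validated")
```

A check like this catches the most common assistant output errors (missing columns, free-text priorities) before the file reaches the team.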
You are a senior domain expert using Research Rabbit to produce an authoritative map of the intellectual lineage for theory X given seed paper(s) <SEED_PAPERS>. Multi-step: 1) produce a chronological timeline of the 10 most influential papers with one-sentence impact notes; 2) extract 3 citation chains (root to modern) each as a list of titles; 3) identify 5 specific methodological or empirical gaps with evidence links; 4) propose 5 precise research questions that would address these gaps; 5) recommend 3 target journals or conferences. Output format: numbered sections for timeline, chains, gaps, research questions, target venues. Example timeline item: 1998 - Title: 'Introduced concept Y' - impact: 'Established theoretical foundation for Z.'
You are Research Rabbit acting as a literature review writer. Input is a library or collection link <LIBRARY_ID> of 20-50 papers. Task: produce (A) ten annotated entries each with full citation and a two-sentence annotation highlighting findings and limitations, and (B) an 800-word synthesized related-work draft that weaves those ten into coherent themes, with inline parenthetical citations. Constraints: annotations must be neutral and concise; the synthesis must identify three thematic threads and conclude with two open research directions. Output format: Part A: numbered annotations; Part B: 800-word narrative. Example annotation: 1) Smith et al. 2016. Two-sentence note: 'Shows X using method A; limits include small N and lack of longitudinal evaluation.'
Choose Research Rabbit over Connected Papers if you prioritize shareable collections and continuous alerts tied to collections rather than static visual snapshots.
Head-to-head, the comparison with top alternatives comes down to this: Connected Papers produces static per-paper graph snapshots, Zotero offers deeper PDF and reference management, and Research Rabbit leads on living, shareable collections with continuous alerts.