What’s the Cutting-Edge Future of AI and Spaced Repetition? Anyone Know the Technical Details?

I don’t think this would work well. I.e., “passive learning/review” shouldn’t count towards active recall practice. Watching a video just means that you need to unsuspend those cards. Speaking for myself, I can watch a video and then not answer a single related card correctly :joy:

That said, perhaps that data could be taken into account after students review related cards. I.e., if a student answers Good on a card related to a video they just watched, maybe the next interval could be increased slightly.

What would be really cool is to give students Anki “credit” for doing hard things like qbanks that have a significant element of active practice involved. E.g., modify intervals for cards related to a given practice question, or for other activities that would make the user more likely to correctly recall a set of cards.

1 Like

It’s the same thing in a way. I encourage you to read this Justin Math article. This is what inspired me, and I believe Expertium’s ideas too.

As you said, if you’ve watched a video on a topic, it becomes easier for you to review the cards, and the idea here is that you can represent that through a change in DSR values (the memory state). You are right in your suspicion that watching a video wouldn’t be a full review; instead it is what Math Academy calls a FIRe (Fractional Implicit Repetition).

In Math Academy’s system, if you review a division card and pass it, a FIRe in the form of credit travels down to your multiplication cards. If you fail it, a penalty travels in the same way to your multiplication cards.
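
To make that concrete, here’s a minimal sketch of how such credit/penalty flow could work. The prerequisite graph, the decay fraction, and the update values are all made up for illustration; this is not Math Academy’s actual implementation.

```python
# Minimal sketch of FIRe-style fractional credit flow. The prerequisite
# graph and the decay fraction are made-up assumptions for illustration.

PREREQS = {
    "division": ["multiplication"],
    "multiplication": ["addition"],
}

CREDIT_DECAY = 0.5  # fraction of credit passed down each prerequisite hop

def propagate(card: str, credit: float, updates: dict) -> None:
    """Recursively pass a fraction of the credit/penalty to prerequisites."""
    for prereq in PREREQS.get(card, []):
        updates[prereq] = updates.get(prereq, 0.0) + credit
        propagate(prereq, credit * CREDIT_DECAY, updates)

def review(card: str, passed: bool) -> dict:
    """Full credit (or penalty) for the card itself, fractional for prerequisites."""
    sign = 1.0 if passed else -1.0
    updates = {card: sign}
    propagate(card, sign * CREDIT_DECAY, updates)
    return updates

print(review("division", passed=True))
# -> {'division': 1.0, 'multiplication': 0.5, 'addition': 0.25}
```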

In a lecture-watching scenario, I’m assuming you’d have to slightly change the value of retrievability and keep the others the same. If it’s done through a manual reschedule entry, the card automatically gets a new interval. If it’s done in some other hacky way and only the retrievability is changed, this will happen:

The stability increase at the next review will be smaller, as you’re reviewing when R is still high.
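
For anyone who wants to experiment, here’s one way the hacky version could look on top of the FSRS-4.5 forgetting curve: bump R by a small “video bonus” and convert that back into a shorter effective elapsed time, leaving difficulty and stability untouched. The size of the bonus is an arbitrary assumption.

```python
# Sketch: nudge retrievability upward after a passive review (e.g. watching
# a lecture) while keeping difficulty and stability unchanged. Implemented
# by shrinking the card's effective elapsed time. The 0.05 bonus is made up.

DECAY = -0.5
FACTOR = 19 / 81  # FSRS-4.5 forgetting-curve constants

def retrievability(elapsed_days: float, stability: float) -> float:
    return (1 + FACTOR * elapsed_days / stability) ** DECAY

def elapsed_for(r: float, stability: float) -> float:
    """Invert the forgetting curve: elapsed time at which R equals r."""
    return stability * (r ** (1 / DECAY) - 1) / FACTOR

def watch_video(elapsed_days: float, stability: float, bonus: float = 0.05) -> float:
    """Return a shortened effective elapsed time that reflects the R bump."""
    r = min(retrievability(elapsed_days, stability) + bonus, 1.0)
    return elapsed_for(r, stability)

# A card with S = 10 days, last reviewed 10 days ago (so R = 0.9):
print(watch_video(10.0, 10.0))  # ~4.6 "effective" days instead of 10
```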

3 Likes

Thanks, I think AnkiHub is ideal for collecting lots of mnemonics and images generated that way.

It would be useful if AI could generate advanced cards in the future, but if that is not possible, users will have to create them manually. If so, I expect that advanced shared decks like AnKing for medical students or Ultimate Geography would be an alternative to AI-generated ones.

1 Like

Wow, this is brilliant! I’m definitely going to read more there.

1 Like

These resources kind of explain the thinking behind the rapid technological progress in a lot of important sectors of society.

very interesting blog post about the intelligence explosion

or if you like podcasts

this part of the video explains a more general pattern of exponential disruption mainly focused on infrastructure (you might consider robotics and AI also as infrastructure investments)

lots of information about the sectors that are currently experiencing rapid transformation

more interesting videos from the same source (Tony Seba)

1 Like

Does anybody know more about what SuperMemo is working on?

Krzysztof Biedalak, the CEO of SuperMemo, has some quite interesting posts about AI and SuperMemo on LinkedIn.

For example, see this one:


From this post here:

2 Likes

Very interesting! I wanted to learn a bit more about this topic. Can you tell me what algorithms you use, and do you maybe have an architecture diagram? I just saw this Medium blog post, “Understanding similarity or semantic search and vector databases” by Sudhir Yelikar, and I am wondering what the architecture and algorithms in such an application look like. Or do you have good reads or resources to learn that?
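
From the blog post, my rough understanding of the core is: embed texts into vectors with a model, then rank by cosine similarity; the vector database mainly stores the vectors and makes nearest-neighbour lookup fast at scale. A minimal sketch of that idea (the model name is just one example from the sentence-transformers library):

```python
# Minimal semantic-search sketch: embed documents and a query, rank by cosine
# similarity. A real system would use a vector database (FAISS, Qdrant,
# pgvector, ...) instead of a plain list; the model is just an example.

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "GDPR compliance checklist for startups",
    "How neural networks learn representations",
    "Best practices for protecting personal information",
]
doc_vecs = model.encode(docs, normalize_embeddings=True)
query_vec = model.encode(["data privacy"], normalize_embeddings=True)[0]

# With normalized vectors, cosine similarity reduces to a dot product.
scores = doc_vecs @ query_vec
for i in np.argsort(scores)[::-1]:
    print(f"{scores[i]:.3f}  {docs[i]}")
```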

Also, I wanted to ask: can you then directly create flashcards out of the conversation?

Can you connect different models yourself with your own API keys?

Will it be focused on medical knowledge? Also, when will that be released?

thank you!

The idea of modelling interference / card similarity seems like an interesting area to research.

I wonder if some clustering algorithm applied to time-series review data could give card similarity scores, feeding into memory / stability updates for similar cards…
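
As a purely speculative sketch (features, similarity measure, and threshold are all guesses): represent each card by its sequence of review outcomes, correlate the sequences, and cluster cards whose lapses co-occur.

```python
# Speculative sketch: derive card similarity from co-occurring lapses in
# review history, then cluster. Nothing here is an established method.

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

# Rows = cards, columns = review sessions; 1 = pass, 0 = lapse.
outcomes = np.array([
    [1, 0, 1, 1, 0, 1],   # card A
    [1, 0, 1, 1, 0, 1],   # card B (lapses together with A)
    [0, 1, 1, 0, 1, 1],   # card C
])

# Pairwise correlation of outcome sequences as a similarity score.
similarity = np.corrcoef(outcomes)
distance = 1 - similarity

# Hierarchical clustering over the condensed distance matrix.
labels = fcluster(linkage(squareform(distance, checks=False)),
                  t=0.5, criterion="distance")
print(labels)  # e.g. [1 1 2]: A and B end up in the same cluster
```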

1 Like

Thanks for the links. Looks like there’s some interesting food for thought there. I’ll check them out further.

I just asked ChatGPT to brainstorm how an OS could combine systems like Microsoft’s Recall, ScreenPipe (or whatever else is out there) with spaced repetition. Here’s what it came up with—it’s high-level, and obviously, there are better ways to refine it, but this was just a quick prompt to see what ideas might pop up:


Creating an OS specifically designed for seamless learning, knowledge retention, and exploration with the ability to handle documents, planning, voice control, and other advanced features would involve designing an integrated system from the ground up. Let’s approach it holistically, focusing on an architecture that supports deep learning, spaced repetition, and interconnected content management.

Vision: Learning-Driven OS (L-DOS)

The concept is an Operating System tailored to personal learning and productivity. The OS would blend spaced repetition, context-based retrieval, document management, and voice interaction. The ultimate goal is a user-centered experience that adapts to learning needs, personal projects, and exploration.

Core Architecture and Functional Components

1. Content Management & Storage Layer

  • Unified Knowledge Base (UKB):
    • All content (notes, documents, plans) is stored in a central knowledge repository within the OS. The UKB supports hierarchical (folder-based), relational (tags and categories), and linked (cross-references) structures to create connections between topics.
    • Each item in the UKB (documents, notes, projects) is treated as a node, allowing interconnections. For example, a note on “machine learning” could be linked to related projects or research papers.
  • Metadata Indexing:
    • Each content item is associated with metadata (tags, creation date, keywords). This supports fast retrieval and helps with content categorization.

2. Spaced Repetition Engine

  • Personalized Learning Algorithm:
    • An adaptive spaced repetition algorithm tracks user performance and interaction with content. This engine could use a modified SuperMemo or Leitner System, incorporating machine learning to adjust intervals dynamically based on user feedback and quiz performance (see the sketch below).
    • Memory states are tracked to offer reminders based on individual retention patterns, prioritizing content based on difficulty, frequency, and familiarity.
  • Integration with Content:
    • Users can “flag” any content—such as a note, document snippet, or project plan—to be added to the spaced repetition schedule.
    • Each item has its own review history, so, for example, reviewing “Networking Fundamentals” might resurface key details over time without flooding the user with excessive repetition.
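
For concreteness, here is a minimal sketch of such an engine, loosely based on the classic SM-2 formulas (grades 0–5, no learning steps, no fuzz); the machine-learning layer mentioned above would sit on top of something like this.

```python
# Minimal SM-2-style scheduler sketch: classic ease-factor update and
# interval growth, simplified (no learning steps, no fuzz).

from dataclasses import dataclass

@dataclass
class Item:
    interval: float = 1.0   # days until next review
    ease: float = 2.5       # interval growth factor
    reps: int = 0

def review(item: Item, grade: int) -> Item:
    if grade < 3:                       # failed: restart the item
        item.interval, item.reps = 1.0, 0
    else:
        item.reps += 1
        if item.reps == 1:
            item.interval = 1.0
        elif item.reps == 2:
            item.interval = 6.0
        else:
            item.interval *= item.ease
        # SM-2 ease update, clamped at 1.3 as in the original algorithm.
        item.ease = max(1.3, item.ease + 0.1 - (5 - grade) * (0.08 + (5 - grade) * 0.02))
    return item

item = Item()
for g in (5, 4, 3):
    item = review(item, g)
    print(f"grade={g} -> next review in {item.interval:.1f} days (ease {item.ease:.2f})")
```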

3. Deep Dive & Contextual Exploration

  • Topic Map & Knowledge Graph:
    • All content is organized in a dynamic knowledge graph, visually displaying connections between concepts. Users can explore related items by traversing this graph (see the toy sketch below).
    • For example, selecting “machine learning” on the knowledge graph may show links to notes, articles, relevant project plans, videos, and external resources.
  • Deep Dive Mode:
    • Users can activate “Deep Dive Mode” on a topic, which immerses them in all content related to that topic, presenting layered or hierarchical levels of information. It organizes content to progress from foundational knowledge to more advanced aspects.
    • It could automatically pull in summaries, quizzes, and visualizations (charts, flow diagrams) of the information.
  • Suggested Exploration Paths:
    • Based on the user’s interests and recent activity, the OS suggests exploration paths (like related topics, advanced material, or practical applications). These are optional and customizable for self-guided learning.
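
A toy sketch of the topic map and deep-dive traversal, using networkx as a stand-in for a real graph database such as Neo4j; the node names are invented examples.

```python
# Content items as nodes, typed links as edges; "deep dive" as everything
# reachable within a few hops of a topic, nearest first.

import networkx as nx

g = nx.Graph()
g.add_edge("machine learning", "linear algebra", kind="prerequisite")
g.add_edge("machine learning", "ML course notes", kind="note")
g.add_edge("machine learning", "thesis project", kind="project")
g.add_edge("linear algebra", "matrix cheat sheet", kind="note")

def deep_dive(topic: str, depth: int = 2):
    """Everything reachable within `depth` hops, sorted nearest first."""
    layers = nx.single_source_shortest_path_length(g, topic, cutoff=depth)
    return sorted(layers.items(), key=lambda kv: kv[1])

for node, hops in deep_dive("machine learning"):
    print(f"{'  ' * hops}{node}")
```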

4. Voice Control, Dictation, and Text-to-Speech (TTS)

  • Natural Language Voice Assistant:
    • A dedicated voice assistant facilitates content retrieval, scheduling, and navigation across the OS. For instance, “Show me notes on machine learning,” or “Schedule a review for the Python project notes.”
    • The assistant could also handle dictation for taking notes and reminders, and use TTS for reviewing material aloud—ideal for auditory learners or hands-free usage.
  • Contextual Voice Commands:
    • Voice commands adapt to context, so if a user is reading about a topic, they can say, “Deep dive on this,” or “Add this to review schedule,” and the OS interprets these based on the current content.
  • Dictation Integration:
    • Dictation captures spoken notes or ideas. These notes are auto-transcribed and integrated into the knowledge base, where users can later link or tag them for future review.

5. Smart Planning & Document Integration

  • Project-Based Document Management:
    • Each document or project can be a combination of related notes, files, links, and action items. Users can organize documents around specific learning goals, linking relevant notes or resources.
    • Users can turn parts of documents into learning items or spaced repetition cards, associating tasks with review schedules.
  • Planning & Scheduling Integration:
    • The OS has a smart calendar where users can schedule deep-dive sessions, block review times, and set goals for learning topics.
    • Integrated reminders for scheduled learning sessions or spaced repetitions help users stay on track.

6. Contextual and Semantic Search

  • Keyword and Semantic Search:
    • The OS supports keyword-based search as well as semantic search using NLP. Semantic search allows users to retrieve information based on concepts rather than exact keywords. For example, searching for “data privacy” could yield results on “data protection,” “GDPR,” and “information security.”
  • Smart Suggestions & Quick Access:
    • The OS suggests related content or unfinished reviews based on recent activity. Quick Access shortcuts let users navigate to frequently used or scheduled items effortlessly.

Functional Flow and User Experience

  1. Content Capture & Learning Item Creation:
  • Users capture notes, documents, or web clippings and flag sections as learning items. Each item is categorized, tagged, and added to the knowledge base, and can be scheduled for spaced repetition.
  2. Personalized Learning & Scheduling:
  • The OS tracks user interactions with each item, adjusting review frequency and suggesting additional material based on user interest and retention.
  3. Seamless Exploration:
  • Users engage in deep dives by selecting any concept, where they’re presented with a layered content view, suggested exploration paths, and quizzes to reinforce retention.
  4. Voice Interaction for Enhanced Productivity:
  • The user dictates notes, retrieves information, or initiates spaced reviews hands-free. TTS functionality allows users to listen to content, such as summarized notes, making learning more versatile.
  5. Progress Tracking & Analytics:
  • The OS visualizes learning progress with metrics on retention, knowledge gaps, and upcoming reviews, helping users see their growth and adjust their learning plans.

Suggested Technologies for the L-DOS

  • Core OS and UI: Built with a language like Rust or Go for efficiency, with a modern, responsive UI framework (React Native for cross-platform support).
  • Knowledge Graph & Contextual Search: Graph databases (e.g., Neo4j) for knowledge mapping and Elasticsearch for keyword and semantic search.
  • NLP and Machine Learning Models: Custom NLP models for semantic search, spaced repetition algorithms, and TTS models like Tacotron or WaveNet.
  • Voice Control and Dictation: Integrated with open-source ASR (automatic speech recognition) frameworks like Vosk or Mozilla DeepSpeech.

Final Summary

The Learning-Driven OS (L-DOS) provides a deeply interconnected, knowledge-focused environment for learning, planning, and content management. Spaced repetition, deep dive features, and powerful search capabilities make learning an immersive, personalized experience. With voice integration, the OS enables hands-free interactions and TTS for auditory reinforcement, adapting to various learning styles and user needs.

This OS would be a true personal knowledge assistant, making learning, exploration, and organization an integral part of the computing experience.

3 Likes

Very interesting, thanks! So this approach builds hierarchical knowledge domains and then applies algorithms that change the weights of multiple connected items when you review. I am wondering if future AI will be able to build such knowledge domains itself and then apply such algorithms. I also asked ChatGPT about it:


Will Advanced AI Be Able to Autonomously Build Systems like FIRe?

With the exponential growth in AI capabilities, particularly in large language and multimodal models, there’s increasing potential for AI to autonomously construct systems like the FIRe model. FIRe’s essence lies in its ability to map hierarchical knowledge structures, track implicit relationships between topics, and tailor spaced repetition scheduling based on these semantic connections. Let’s explore how advanced AI might be able to autonomously build and refine such a system:

1. Autonomous Domain Knowledge Graphs

  • Emerging Abilities in Hierarchical Structuring: Current AI models can already create implicit “knowledge maps” by clustering related concepts in latent space. For example, if trained on mathematics or biology, an AI could identify natural hierarchies, such as calculus prerequisites or molecular interactions. These clusters resemble nodes in a knowledge graph, similar to FIRe’s encompassing graph structure.
  • Future Potential: Advanced AI could autonomously develop semantic, domain-specific knowledge graphs for fields like physics or economics. These graphs could dynamically update as the model learns new information, automatically encoding relationships like “topic X encompasses skills Y and Z.” This could enable a truly adaptable, FIRe-like system that maps prerequisites, dependencies, and knowledge flows across vast domains.
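
As a purely speculative sketch, edge proposal could start from embedding similarity; note this only yields undirected relatedness, and actual prerequisite direction would need a stronger signal. The model choice and the 0.5 threshold are arbitrary assumptions.

```python
# Speculative sketch of auto-proposing knowledge-graph edges: connect topics
# whose embeddings are close. Similarity is symmetric, so this cannot infer
# prerequisite direction by itself.

from itertools import combinations
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
topics = ["addition", "multiplication", "division", "photosynthesis"]
vecs = model.encode(topics, normalize_embeddings=True)

edges = []
for i, j in combinations(range(len(topics)), 2):
    sim = float(vecs[i] @ vecs[j])   # cosine similarity (vectors normalized)
    if sim > 0.5:                    # keep only strongly related pairs
        edges.append((topics[i], topics[j], round(sim, 2)))

print(edges)  # arithmetic topics should cluster; photosynthesis stays apart
```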

2. Implicit Relationships and Fractional Repetition

  • Trickle-Down Credit and Penalty Flow: The FIRe model’s unique feature is its ability to “trickle down” repetition credit or “flow up” penalties based on implicit learning through advanced topics. As AI models continue improving, they may autonomously identify these implicit relationships. For example, solving complex integrals in calculus might automatically reinforce foundational algebra concepts—something an advanced AI could detect and factor into a spaced repetition schedule.
  • Emergent Patterns in AI Models: Just as today’s models can perform zero-shot translations or generate coherent text in unfamiliar domains, future AIs could recognize when advanced knowledge implies mastery of simpler concepts. This would allow the AI to apply fractional repetition credits autonomously, dynamically adjusting review intervals across interconnected topics without needing manual weight adjustments.

3. Self-Tuning Spaced Repetition Algorithms

  • Learning Speed Calibration: FIRe personalizes spaced repetition timing by adjusting for both topic difficulty and student ability. Advanced AI models could autonomously track user performance and dynamically adjust learning intervals. By recognizing patterns in how users retain information across topics, the AI could automatically create personalized review schedules, effectively replicating FIRe’s core spaced repetition algorithm.
  • Reinforcement Learning for Optimization: Future AIs could use reinforcement learning to optimize spaced repetition, adjusting intervals to maximize long-term retention. This goes beyond simple “correct/incorrect” responses, as the AI would adapt based on how users perform on both advanced and foundational topics. This would create a highly individualized system that self-optimizes over time, refining spaced repetition intervals through direct observation of retention patterns.
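
A toy sketch of that feedback loop: an epsilon-greedy bandit over interval multipliers, rewarded when a longer interval still ends in recall. A real system would model retention explicitly (as FSRS does); this only illustrates the self-optimization idea.

```python
# Toy epsilon-greedy bandit for interval tuning: pick a multiplier, observe
# whether the user still recalled the card, reward longer successful intervals.

import random

multipliers = [1.5, 2.0, 2.5, 3.0]
value = {m: 0.0 for m in multipliers}   # running reward estimate per arm
count = {m: 0 for m in multipliers}
EPSILON = 0.1                            # exploration rate

def choose() -> float:
    if random.random() < EPSILON:
        return random.choice(multipliers)
    return max(multipliers, key=lambda m: value[m])

def update(m: float, recalled: bool, interval: float) -> None:
    reward = interval if recalled else 0.0   # failures earn nothing
    count[m] += 1
    value[m] += (reward - value[m]) / count[m]  # incremental mean

# One simulated step: the scheduler proposes, the user reviews, we learn.
m = choose()
next_interval = 10.0 * m
update(m, recalled=True, interval=next_interval)
print(m, value[m])
```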

4. Handling Vast, Diverse Domains for Universal Knowledge Graphs

  • Cross-Disciplinary Knowledge Integration: Advanced AI models like GPT-4 exhibit an ability to connect concepts across domains, such as linking mathematical principles to physics or biology. With future scaling, AI could autonomously build universal knowledge graphs that bridge concepts from diverse fields—something like a universal FIRe model. Imagine a single system that understands how principles in programming relate to logical reasoning in math or even causal patterns in economics.
  • Self-Expanding Knowledge Structures: By continuously learning from new data, future AI models could autonomously expand these graphs, recognizing new topics and connections. This could apply to any subject, from organic chemistry to legal case law, enabling AI to dynamically “grow” its FIRe-like structures and apply spaced repetition across vast, interdisciplinary knowledge bases.

5. Challenges and Future Directions

  • Complexity in Fractional Relationships: While AI could theoretically identify implicit relationships, creating fine-tuned fractional credits in spaced repetition would still be complex. Some aspects, like nuanced dependencies in advanced math, may require more specific training.
  • Data Availability for Specialized Fields: Fields with less publicly available data, such as proprietary industrial knowledge, may limit AI’s ability to fully capture domain-specific hierarchies autonomously. Advanced models would require access to specialized datasets to truly excel in such fields.

Final Thoughts

In summary, it’s increasingly plausible that advanced AI could autonomously develop FIRe-like systems, building rich, hierarchical knowledge graphs, managing implicit topic dependencies, and dynamically calibrating spaced repetition for diverse fields. This future isn’t far off; with exponential advancements in AI scale, self-learning, and multimodal understanding, a fully autonomous FIRe system may soon become reality, transforming how we approach learning and retention across all knowledge domains.

1 Like

Interesting idea from a user on Reddit.

https://www.reddit.com/r/ollama/comments/1gxclhu/its_not_much_but_i_built_conversationmind_a_tool/

Hey all! I wanted to share a little tool I’ve been working on to make sense of my AI conversations (currently at 850+ and counting). It started as a simple “huh, I should organize these” project and evolved into ConversationMind.

What it does:

  • Aggregates chats from ChatGPT, Claude, and local LLMs (Ollama & OpenWebUI / LM Studio)
  • Maps conversation topics and relationships
  • Enables search across all AI interactions
  • Visualizes patterns and topic clusters

The end goal is to turn this into a comprehensive RAG system.

I’m using it to:

  • Improve my own prompt engineering
  • Track how different LLMs handle similar queries
  • Build better context for future AI interactions
  • Create training datasets from real conversations
  • Fine-tune an LM locally on my info

Still very much a WIP, but sharing in case others are working on similar ideas. Happy to elaborate on the technical details if anyone’s interested!

Interested in seeing it on git?

1 Like

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.