Berlin Show & Tell #3

On March 26th, the Berlin QS group had its third Show & Tell at Betahaus, with over 60 attendees and four presenters.

First up was Louis Crowe, a medical researcher who discussed new applications for neuromuscular electrical stimulation (NMES). You may be familiar with this from the overhyped “ab belts” from late-night infomercials. As Louis described, the technology also has many therapeutic uses, and it can be more effective as exercise when used at higher voltages — something he’s explored through self-experimentation and tracking, and showed us with a startling live demonstration during his talk.

Cassie Zhen, a graphic designer and fitness trainer, presented a one-month self-experiment in which she materially improved her cardiovascular fitness — as measured by resting heart rate and VO2 max — through a rigorously tracked diet and exercise program. Her presentation is available here.

Marc Lallemand, an engineer working at a startup, showed us what he learned from several months of tracking his work hours, alcohol and caffeine consumption, emotional state and various other aspects of his life.

Finally, Florian Schumacher, my co-organizer, presented his results from comparing six different pedometers. Tracking his everyday activity with a Basis Band, Bodymedia Link, Fitbit One, Fuelband, Jawbone Up and LUMOback, he analyzed the differences among the step counts the devices recorded and found a total variation of 26.8%. He emphasized the motivational effect pedometers have on his activity level, and pointed out that the devices that create a tight feedback loop, by making the information easily accessible and understandable, are the ones that maximize this benefit.

Florian recently began working part of his time at a standing desk, and started to measure and improve his posture with the LUMOback sensor mentioned above. By comparing his LUMOback score over several days, he showed how sitting for long hours in a conference chair led to a lot of slouching, while working at the standing desk made it easy for him to keep a healthy posture for most of the day.

He closed by presenting his results from tracking his time by project, using the OfficeTime app. He found it helpful both in setting better priorities and in increasing his focus and concentration.

Why Isn’t Spaced Repetition Software More Popular? (Part 2)

This is adapted from a presentation I gave at the Berlin Quantified Self Meetup in November 2012. There are two parts:

  1. What is spaced repetition software?
  2. Why don’t more people use it, and what can we do about that?

Part 2: Why Isn’t Spaced Rep Software More Popular?

In my previous post, I introduced the field of spaced repetition software; in this one, I’ll discuss why it hasn’t spread further and some new apps that might change that, including one that I’m working on myself.

First of all, I’m not the only person asking this question. Here’s a quote from Gary Wolf, co-founder of the Quantified Self movement, who wrote a great article on Piotr Wozniak and Supermemo:

Why isn’t this amazing technique more common? I explained some of the obvious reasons in my story. Still, I expected that, having launched the idea into an environment well suited to nourish it (many Wired readers are passionate learners, and many of them have software, design, and business skills), I would soon see some new implementations. And I was not disappointed. There are half a dozen versions of Supermemo in common use today. But they are used by very few people. Clearly, the problem remains unsolved.

And the Supermemo wiki has a long list of ideas, some more constructive than others. But they’re clearly aware of the problem.

I have some first-hand experience with this: when I was using Anki and Mnemosyne to study German, I recommended them to many of my classmates. A few of them got as far as installing the software and loading up my German/English deck (which I would email them), but none of them really stuck with it. And these were smart, relatively computer-savvy people, often with better overall study habits than mine, but this kind of software just didn’t quite “click” for them.

Why not? Here are some of those “obvious reasons”…

  • It’s difficult / mentally taxing. No use beating around the bush. These apps do require a certain type of concentration that’s a little uncomfortable at first. But once you get over that small initial hurdle… well, it’s still not Angry Birds exactly, but I find it a lot less difficult (and more satisfying) than other study methods.
  • None of the engagement methods of other educational software, like levels and achievements, social features, or “gamification.” I’m a skeptic on just how engaging (as opposed to distracting) some of these other features are, but there’s no question that some of them help, especially early on.
  • Not enough feedback, or not the right kind. This is a critical one. Notice that the graph I showed you — progress over time — does not come standard in Mnemosyne. I had to record that data every day in Excel and make it myself. You can get “snapshot” charts in the app, but not time-oriented ones, which are a better motivator.
  • Algorithm is too “strict.” If you stop using these apps for a few days, you’ll pile up lots of “overdue” cards. In theory you can ignore them and just take it slow, but there’s still that counter in the corner of the screen reminding you that you’re “behind.” In my mind, a good study app should not try too hard to dictate the pattern of usage, but rather adapt to make the most of whatever time the user puts in.
    There’s also the “one piece of info per card” rule, which I’ll come back to. And because of the moving average nature of the algorithm, it can be hard to convince the software that you really know a card, and you’ll often keep seeing it after you’ve rated it very highly. Finally, when I said earlier that you can define the ratings for yourself, I think some purists would object to that too.
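
On the feedback point above: the progress-over-time chart currently has to be assembled by hand, but the daily logging step that an app could automate is tiny. Here’s a minimal sketch in Python, with a file name and column layout of my own invention (not taken from any existing app):

```python
import csv
import datetime
from collections import Counter

def log_progress(deck, path="progress.csv"):
    """Append today's per-rating card counts to a CSV, one row per day.

    `deck` maps each card to its latest rating (0-5). Each row is:
    date, count-of-rating-0, count-of-rating-1, ..., count-of-rating-5.
    """
    counts = Counter(deck.values())
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([datetime.date.today().isoformat()]
                        + [counts.get(r, 0) for r in range(6)])
```

Run this once after each day’s review session and the file accumulates one row per day, ready to plot as the kind of stacked-bar progress chart I had to build manually in Excel.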

Here are two other reasons that I haven’t seen elsewhere, although to me they’re equally obvious:

  • Still mostly client-side apps, not cloud-based
    • syncing among different devices is annoying
    • risk of data loss
    • compatibility and version issues
  • Still based on user-generated content
    • entering your own cards is time consuming, especially on a phone or tablet
    • downloading other users’ decks isn’t always much better
    • formatted text and other rich content are difficult

The risk of data loss is very real. It’s easy to say “well, you should be backing up everything anyway,” but the reality is that most of us don’t, and it’s hard enough to get users to adopt this one new habit without making them change others.

And the focus on user-generated content is a real problem as well. Most people are just never going to enter their own cards in large numbers. Downloading other users’ decks introduces the risk of typos and grammar mistakes (a huge problem if you’re learning a new language, less so if you’re studying something else) and, more importantly, their content is unlikely to be an exact match for what you want to learn, and it may not be comprehensive or well-formatted. I’ve found that downloading other users’ decks is a good way to try out the software, or try out a new subject, but when I know what I want to learn (say, B1 German vocabulary), I spend as much time editing someone else’s deck as I would in creating my own. And it’s very time-consuming to go beyond a small amount of unformatted text (or maybe a single image) on each side of a card.

Enough complaining — what can we do about it?

Well, here are some encouraging steps forward:

  • Mobile, tablet and web extensions for Mnemosyne and Anki
  • Fully “cloud-based” implementations (Kleio)
  • Shared editing “wiki” models for user-generated content (Memrise)
  • Moving towards professionally-created content (Skritter, Chinese3D)

The mobile and web tools around Mnemosyne and (particularly) Anki are getting better all the time. There are also some people building fully cloud-based implementations, and I recommend Kleio if you want to try one. Memrise uses some other mnemonic methods in addition to a form of spaced repetition, but in any case it’s the first app I’m aware of that lets users edit the same shared decks, so corrections and tweaks are cumulative. This also allows a lot of “meta” content, with more than one piece of information per card. I think these things have a lot to do with the fact that Memrise has achieved a relatively large user base in a short period of time, and (arguably) wider name recognition than the other apps I’ve been talking about.

The alternative to the “wiki” approach, of course, is professional content, and I know of at least two apps that do that for Asian languages: Skritter and Chinese3D. In this case, fixed professional content allows much more intricate graphical interfaces and user interaction, including drawing the actual characters on the screen in Skritter. If you’re studying Chinese or Japanese, I recommend checking these out.

And finally, I’ve developed my own prototype for European languages, with each pair of languages built from the ground up by professional translators and tagged according to real-world metrics (like the A1-C2 European language standards, or particular official exams). I’ve also simplified the algorithm and rating scale, added lots of extra content and formatting to each card, and made various other attempts to address the issues above.

The first “deck” is German for English speakers, which is now up to several thousand cards. We’ve got the web app up and running, and we’re starting on native mobile apps now.



I won’t go into more detail about my project here, but if you’re learning German and want to be a beta tester, send me an email at and I’ll get you set up.

In any case, if you’re new to the subject of spaced repetition, I hope I’ve gotten you interested enough to try at least one of the tools I’ve mentioned. If you’re a user or creator of a tool that I haven’t mentioned, I’d love to hear about it and would be happy to append it to this post. And if you have other questions or just want to discuss (or disagree with) anything I’ve written here, just drop me a line.

Why Isn’t Spaced Repetition Software More Popular? (Part 1)

This is adapted from a presentation I gave at the Berlin Quantified Self Meetup in November 2012. There are two parts:

  1. What is spaced repetition software?
  2. Why don’t more people use it, and what can we do about that?

Part 1: What is Spaced Repetition Software?

Spaced repetition software is a simple studying tool that works incredibly well, especially for learning languages. But it’s been around for over twenty years and – despite some encouraging signs in recent years – still hasn’t spread beyond a small group of nerds like myself.

It’s basically just flashcards with rating buttons. After you see each card, you rate your knowledge of it on a numeric scale. The example below is for someone learning German; the German word appears on the top of the screen and you try to think of the English word. When you think you know it (or when you’ve given up) you hit Enter, and the answer appears on the bottom of the screen, along with these numbered rating buttons. And once you’ve rated the card, you see the next one.


Now, this particular scale goes from one to four. If you knew the answer right away, you’d rate it a four. If you had no idea, it would be a one. If you were close – maybe you thought “chair” – you might rate it a two. These are self-ratings, remember, so you can decide what they mean, as long as you’re consistent and they form a continuous scale.

As you may have guessed, there’s an algorithm in the background that uses these ratings to decide when to schedule the card again. So if you rate it one, you might see it again ten cards later. If you rate it four, you might not see it again for a week or two.

This is not rocket science – in fact, it’s the same thing you’d do with paper flashcards, taking out the ones you know best and reviewing the others more frequently. The basic algorithm, which was written in 1987 by a Polish graduate student named Piotr Wozniak, is short and simple, and any competent programmer could probably write a rough implementation of it fairly quickly. In fact, several of the programmers I know who study this way have done just that.
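
To give a sense of just how short it is, here’s a rough sketch in the spirit of Wozniak’s published SM-2 algorithm. The function and variable names are mine, and real apps tune the constants, but the core logic fits in a few lines:

```python
def schedule(quality, repetitions, interval, easiness):
    """Return the next (repetitions, interval_in_days, easiness) for a card
    after a review rated `quality` on a 0-5 scale."""
    if quality < 3:
        # Failed recall: start the card over, but remember its easiness.
        repetitions, interval = 0, 1
    else:
        if repetitions == 0:
            interval = 1        # first successful review: see it tomorrow
        elif repetitions == 1:
            interval = 6        # second: see it in about a week
        else:
            interval = round(interval * easiness)  # then grow geometrically
        repetitions += 1
    # Higher ratings raise the easiness factor, lower ones shrink it,
    # with a floor of 1.3 so intervals always keep growing.
    easiness = max(1.3, easiness + 0.1
                   - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return repetitions, interval, easiness
```

Each card carries its own (repetitions, interval, easiness) state; after every review you feed in the user’s rating, store the result, and the interval tells you how many days to wait before showing that card again.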

Similar “spacing” techniques had been implemented with older analog study methods, like the Pimsleur tapes of the ’70s, but Wozniak realized that using a computer to manage this process — and introducing user-specific feedback, rather than relying on standard spacing intervals — would result in a quantum leap in efficiency. And it does. I used this method two years ago when I was learning German, and I’d say I learned just as much this way as I did in five months of intensive language classes. And that’s comparing three and a half hours a day in class to just twenty minutes a day with the cards. But for this article, I did a more self-contained experiment, to show you how the whole process works from start to finish.

A Self-Experiment: October 2012

So I decided to read a book in German, look up all the words I didn’t know, and learn as many as possible over a one-month period. Here’s the book I chose, a translation of an American crime novel from the 1940s:


Step one was to mark the unfamiliar words and look them up:


Step two was to enter them into the software:


And step three was to review and rate the scheduled cards every day:


The rating scale here was zero to five, and you can’t finish a session without having rated every card a two or better. Here’s how I defined the rating scale:

       0: [unused]
       1: got it wrong
       2: got it right, but only because I just saw it
       3: got it right, but had to think about it
       4: knew it right away
       5: actually getting a bit annoying
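
That “two or better” rule is easy to picture as a queue: cards rated below the bar simply go back in and come up again later in the same session. Here’s a minimal sketch with names of my own invention (this isn’t Mnemosyne’s actual code):

```python
from collections import deque

def run_session(cards, rate):
    """Drill `cards` until each has been rated 2 or better.

    `rate` is a callback returning the user's self-rating (0-5) for a
    card. Note that, like the real apps, the session cannot end while
    any card is still being failed.
    """
    queue = deque(cards)
    final_ratings = {}
    while queue:
        card = queue.popleft()
        rating = rate(card)
        final_ratings[card] = rating
        if rating < 2:
            # Below the bar: requeue the card for later in this session.
            queue.append(card)
    return final_ratings
```

In a real app the final rating would also feed the scheduling algorithm to decide when each card next appears.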

And finally, here are the results:


The stacked bars show the total cards in my “deck,” broken down by rating. As you can see, I was adding 20-80 cards every day for the first twenty days, about a chapter a day; once I finished the book, I spent the last ten days just reviewing cards.

By my own target of a rating of 4 or better, I had learned well over 500 new German words by the end of the month. And I knew these words outside the context of these drills: I recognized many of them in other settings, and even managed to use a few in everyday conversation.

So, how long did it take? The following graph shows, in minutes, how much time I spent entering and rating cards. The average was about twelve minutes a day in total:


Now, I didn’t track the time I spent reading and looking up the words, because where the content comes from is a whole separate thing. You could even just skip this step entirely and buy a vocabulary list, which is what I did when I first started using this software as a beginner:


But of course, the process of encountering the words in context and looking them up was also helping me learn them, and without that, I bet that some words would have required a few extra repetitions. So if you’re starting totally cold, you may want to allow for a bit more time to get results like this.

Even if you do, I don’t know any other method that even comes close to this kind of efficiency. For example, here’s what that Hueber book has to say…


I can imagine spending twice as much time doing that and not making the same kind of progress. But even if it worked, it would be much harder to stick with it without this kind of quantitative feedback showing daily improvement. It may sound silly, but the best part of every card review session for me was typing in the new scores and seeing another bar show up on this chart.

Want to try it yourself? Here are the “big three” existing spaced rep programs:

  • Supermemo (old versions free, new versions $40-60)
  • Mnemosyne (free and open source)
  • Anki (Desktop version free, iOS client $25)

Supermemo is the oldest, developed by Wozniak himself. It’s by far the most powerful, with a huge feature set, though it’s consequently harder to learn. Mnemosyne and Anki are two more recent open-source projects that are much simpler and easier to get started with.

Mnemosyne is my personal favorite, and the source of the screenshots that you just saw. But I’ve tried all three and they all work well; it’s largely a matter of taste.

In Part 2, I’ll talk about why these apps haven’t become more widespread, and discuss some newer alternatives that take different approaches. Continue to Part 2 –>

Review of Show & Tell #1

(translated from Arne Tensfeldt’s original post in German)

The first Show & Tell meetup of the Berlin International Quantified Self Group took place on November 22nd. The Berlin QS Group had been founded several weeks earlier for the English-speaking Berlin community. With around 70 participants, the meetup was the largest to date in Germany, having been announced and covered by a variety of Berlin blogs and news sites. The meeting began with the standard “three word introduction,” in which everyone present introduced themselves with just three of their interests or other descriptive words. The variety of chosen descriptions and interests illustrated the breadth of the QS movement and offered a good preview of the subjects that would be discussed over the rest of the evening.

Steve Dean

After this introduction, Steve Dean (head of the NY Quantified Self Group) began with a keynote speech about the founding of the Quantified Self movement as well as his own experience in preparing for an Ironman Triathlon. By measuring his resting pulse every morning, as recommended by his trainer, he was able to predict when he would get sick from overtraining and reschedule his workouts to allow more rest at the right times. He then discussed a second self-experiment that was also shaped by his athletic pursuits: after the end of his intensive Ironman training, he suffered from an inflammation of his eyelids. After countless unsuccessful treatment attempts, he learned from careful self-tracking that the regular exposure to chlorine from swimming had been keeping this problem in check — and after a long break, resuming his visits to the pool led to a recovery from the inflammation. His slide presentation can be seen here.

Max Kossatz

Max Kossatz, CEO of Archify, showed the data he had gathered from his company’s newly developed browser plugin. Archify tracks each website that a user visits, saving all text content and capturing a screenshot. This leads to a type of digital “mindfile” which can be easily searched in the future. Max presented an analysis of his own online content in his personal project “My Online Life for the Last 8 Months.” In addition to showing his preferred sources of information, this data also made it possible to recognize patterns like the drastic reduction of online time during his vacation, or an increase in online activity during his preparation for important business events. His presentation can be seen here.

Peter Lewis

As the third speaker, Peter Lewis (co-organizer of the Berlin QS group) shared his experience with spaced repetition algorithms to optimize learning efficiency, which he had used in learning languages. As a starting point, he set up an experiment in which he (as a native English speaker) tried to acquire all the new vocabulary he found in a German novel — about 900 words — within a period of one month. He demonstrated the use of software that allowed him to track his progress through decks of digital flashcards. With an excursion into theory and algorithms, as well as practical explanations and tips on the current state of the technology, he gave a comprehensive overview of spaced repetition software applications and the different ways to use them.

After the lectures, the attendees had the chance to view demos from some Berlin startups in the QS field and to make new contacts as well. The event was also recorded by the TV show Planetopia; their episode on QS aired on Monday, December 3rd.

The organizers of the Berlin group are already planning their next meeting: in January’s Show & Tell there will be numerous speakers on subjects like genome sequencing, health and biohacking, as well as another Demo Hour with projects and startups from the QS scene.

Participants Sought for Study on Quantified Self Movement

This is a guest post by Marcia Nißen, who’s examining the Quantified Self movement as part of her Bachelor’s thesis.

Why do you measure yourselves? How much time do you invest in self-measurement? What do you track, and how do you track it? I haven’t been able to get these questions off my mind since getting involved with the Quantified Self movement. In my Bachelor’s thesis, I am investigating individual motivations for self-measurement, and I now need your help. If you measure or record anything in your life, I would appreciate it if you would take part in my survey.

Filling out the questionnaire should take 15 to 20 minutes. Your responses will remain anonymous and will of course not be given to third parties. I will be glad to share all results and findings from this data with the Quantified Self community when I finish my thesis. You can find the survey here:

Marcia Nißen studies industrial engineering at the Karlsruher Institut für Technologie (KIT) and is currently writing her Bachelor’s thesis on the topic “Self-Tracking Activities and Motivations”. Since September 2012 she has been blogging at about the progress of her undergraduate work and phenomena she encounters on the topic of Quantified Self.

[Thanks to Edward Tanguay for the English translation of this post]

Types of Self-Tracking

Like most of us, I found my way into QS “from the bottom up” — that is, I got interested in a particular type of self-tracking (in my case, spaced repetition software) and then discovered this larger movement. Since then, and particularly in organizing our first Berlin meetup, I’ve learned a lot more about the wide range of motivations that bring people to self-tracking and QS. For those who are new to the subject, here’s my own (probably incomplete) list of some of the different types of self-tracking, with a few examples of each.

1. Motivational: You know what you want and how to achieve it, and tracking yourself will help you to actually do it. The focus is often on social features and/or habit formation, but the mechanisms can be more complex: Phil Libin (of Evernote) lost 30 pounds just by tracking his weight on an Excel chart, without making any deliberate changes to his diet or routine. A lot of the best-known products in this vein are focused on fitness, like Runkeeper or Nike+.

2. Facilitative: You know what you want and how to achieve it, but without tracking yourself it would be very difficult and/or inefficient. This is particularly common with SRS software and other study tools, and also with management of chronic medical conditions, e.g. MySugr and other blood glucose trackers.

3. Experimental: You know what you want but not how to achieve it, and tracking is a way to compare potential methods and/or generate ideas for new ones. The authority on self-experimentation is Seth Roberts, whose blog I’ve been reading for years, and in my opinion the best place to start is his 2004 paper on the subject or his book excerpt here.

4. Documentary: You don’t have a specific goal in mind, you just want to get a more accurate, complete and/or unbiased view of your life/health/time/etc. Can be a way of generating new goals and methods, but can also be a purely aesthetic or self-discovery project. The best-known example of this is probably Noah Kalina’s Everyday video on Youtube, which was popular enough to inspire a Simpsons parody. This recent QS presentation by Sharla Sava covers a related self-photography project and the things she learned from it.

5. Collaborative: Your individual data might not tell you very much, but you’re collecting it to contribute to an aggregate data set for research purposes — often because you have a personal interest in the research subject, but maybe just because you’re a volunteer who wants to help out. A good example would be the openSNP project, which I learned about from this interview at the main QS site.

Of course, many tracking habits and tools fall into more than one of these categories. A lot of the self-improvement literature that overlaps with QS (like that of Tim Ferriss) involves a combination of the first three categories. The same is true for RescueTime, Getting Things Done and many other productivity tools and systems. And the “collaborative” and “experimental” categories often overlap as well, as in networks like CureTogether or the data collected by Roberts from people following his popular diet.

In talking to other self-quantifiers, I’ve already noticed that many of us at first don’t “get” the people who are doing it for very different reasons. As organizers, it’s part of our job to strike a balance among these different self-tracking goals in the mix of presentations and demos at our meetups — but for everyone, it’s also good to realize that they’re not only equally valid goals, but can often complement each other as well.

Welcome to the Quantified Self Berlin Group

This is a brief introduction for members of the new English-language Quantified Self group in Berlin. Of course, we hope to meet many of you in person at our first meetup on Thursday Nov. 22nd.

First, if you’re new to the overall Quantified Self movement, head over to their main website at and start with the About section or the Three Prime Questions.

Second: other than this site, the main way to keep up with our Berlin-area events is via our Meetup group or by following us on Twitter @QS_Berlin.

Bear with us while we sort out the exact structure of this site going forward, but at a minimum you’ll soon be able to follow the English- and German-language posts separately, and we’ll try to translate the most important posts so they’re available in both languages.

We’ll be posting more info here before and after this week’s meeting, and we’ll be covering a lot more at that meeting about how to sign up for future presentations and demos and how the group will operate. In the meantime, feel free to contact me (at or the other organizers with any questions about the new group.