Posts By gg


What is coccinella?

Coccinella is an innovative open-source framework developed for high-throughput behavioral analysis. Leveraging the power of distributed microcomputers, it facilitates real-time tracking of small animals, such as Drosophila melanogaster. Complementing this tracking capability, coccinella employs advanced statistical learning techniques to decipher and categorize observed behaviors. Unlike many high-resolution systems that often require significant resources and may compromise on throughput, coccinella strikes a balance, offering both precision and efficiency. Built upon the foundation of ethoscopes, this platform extracts minimalist yet crucial information from behavioral paradigms. Notably, in comparative studies, coccinella has demonstrated superior performance in recognizing pharmacobehavioral patterns, achieving this at a fraction of the cost of other state-of-the-art systems. The framework promises to complement current ethomics tools as a cost-effective, efficient, and precise instrument for behavioral research. Coccinella analysis can be done in ethoscopy, a Python framework for the analysis of ethoscope data.

How does it work?

Coccinella uses ethoscopes to extract information about the activity of flies in real time. Ethoscopes are machines that use distributed computing on Raspberry Pi to detect and interfere with behaviour. Given the off-the-shelf nature of the devices, the whole setup is inexpensive and scales up very easily. As a term of reference: our lab currently employs about 100 ethoscopes, each processing 20 flies.

Data about the activity of the animal are then fed to a high-throughput toolbox for time-series analysis called HCTSA (or its compact version, Catch22), initially developed at Imperial College London by our colleagues in the Maths Department. The toolbox performs numerous statistical tests aimed at segregating data in an unsupervised way and can therefore be used to cluster together data that the machine recognises as similar. In our case, we tried to identify which drugs have a similar mode of action, potentially recognising and assigning the appropriate pharmacological pathways to new, uncharacterized compounds.
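The idea can be sketched in a few lines of Python. The three features below are toy stand-ins (mean, spread, lag-1 autocorrelation) for the real HCTSA/Catch22 feature sets, and the two "drug" groups are synthetic, but the sketch shows how feature extraction turns raw activity traces into points that separate by behavioural signature:

```python
import numpy as np

def toy_features(ts):
    """Toy stand-ins for the hundreds (HCTSA) or 22 (catch22) features
    the real toolboxes compute: mean, spread, lag-1 autocorrelation."""
    ts = np.asarray(ts, dtype=float)
    lag1 = np.corrcoef(ts[:-1], ts[1:])[0, 1]
    return np.array([ts.mean(), ts.std(), lag1])

rng = np.random.default_rng(0)
t = np.arange(300)

# two hypothetical "drug" groups: arrhythmic vs. strongly rhythmic activity
drug_a = [rng.normal(1.0, 0.5, t.size) for _ in range(5)]
drug_b = [1.0 + np.sin(t / 10) + rng.normal(0.0, 0.1, t.size) for _ in range(5)]

feats = np.array([toy_features(ts) for ts in drug_a + drug_b])

# the rhythmic group separates from the noisy one along the autocorrelation
# axis, which is exactly what an unsupervised clustering step picks up on
print(feats[:5, 2].mean(), feats[5:, 2].mean())
```

Real analyses would replace `toy_features` with the full feature battery and feed the resulting matrix to a proper clustering algorithm.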

We also compared the performance of coccinella to state-of-the-art systems and found that it performs even better!

This is important from the technical point of view but also from the standpoint of neuroscience because it shows that “less is more” when it comes to extracting and recognising behavioural data. In other words, you don’t need to carefully label posture and movement when characterising behaviour: reducing activity to its minimal terms actually works even better!

Coccinella paper on eLife
Ethoscopy / Ethoscope-lab preprint on bioRxiv
Ethoscopy on GitHub
Ethoscopy on PyPi
Ethoscope-lab Docker container on DockerHub
Jupyter Notebook tutorials for Ethoscopy on GitHub
Ethoscopy and Ethoscope-lab documentation on bookstack
Raw data and all notebooks reproducing the paper’s figures on Zenodo

Divergent evolution of sleep functions

Elephants spend up to 18 hours a day eating grass, bushes, roots, and shrubs to maintain their appropriate calorie intake. They sleep only 1 or 2 hours a day. Bats, on the other hand, are believed to sleep more than 20 hours a day. Finally, the great frigatebird: it normally sleeps 9-10 hours a day, and you would have a hard time getting it to sleep less than that. Unless it’s migratory season. In that case, it sleeps 40 minutes a day while flying for days and days in a row. Evolution is one of the great mysteries of sleep. Why do some animals require 20 hours, while others can cope with 1 or 2? Whatever the function of sleep is, how can it be accomplished in 10 hours in one season and 40 minutes in another, as happens in migratory birds?

We won’t really understand what sleep is and what it does if we keep thinking about it in an anthropocentric way. We need to look at it from the evolutionary standpoint, and only then will we be able to grasp what its role in nature is. This work marks our first big attempt in this direction. We did not compare sleep between elephants and bats: too tricky to keep in a test tube and too evolutionarily distant. Instead, we used seven species of Drosophila spanning an evolutionary distance of 5-50 million years, with different ancestral origins and adaptation niches.

In all of them, we measured sleep using a computerised video-tracking system based on Raspberry Pis, which can be linked to robots that deliver sensory stimuli in real time, such as puffs of air or automatic rotations of the test tube to keep the flies awake. We had actually used this device before to explore how fruit flies recognise and respond to salient stimuli during sleep. Here, we combined it with the excellent hidden Markov model initially proposed by the Griffith lab at Brandeis and were able to confirm that the different sleep stages detected by the model do indeed coincide with different arousabilities. Deep-sleeping flies are harder to wake up!
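To give a flavour of how a hidden Markov model can assign sleep stages from activity data, here is a minimal two-state Viterbi decoder in Python. The states, transition and emission probabilities are invented for illustration and far simpler than the model actually used in the work:

```python
import numpy as np

def viterbi(obs, trans, emit, start):
    """Most likely hidden-state path for a sequence of observations."""
    T, n = len(obs), trans.shape[0]
    logp = np.zeros((T, n))
    back = np.zeros((T, n), dtype=int)
    logp[0] = np.log(start) + np.log(emit[:, obs[0]])
    for t in range(1, T):
        for s in range(n):
            scores = logp[t - 1] + np.log(trans[:, s])
            back[t, s] = np.argmax(scores)
            logp[t, s] = scores.max() + np.log(emit[s, obs[t]])
    path = [int(np.argmax(logp[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# states: 0 = deep sleep, 1 = light sleep / awake (illustrative numbers)
trans = np.array([[0.95, 0.05],    # deep sleep is "sticky"
                  [0.10, 0.90]])
emit = np.array([[0.99, 0.01],     # deep sleep: the fly almost never moves
                 [0.30, 0.70]])    # light/awake: the fly usually moves
start = np.array([0.5, 0.5])

obs = np.array([0] * 10 + [1] * 10)   # 0 = immobile minute, 1 = movement
states = viterbi(obs, trans, emit, start)
print(states)
```

The decoder labels the immobile stretch as deep sleep and the active stretch as light sleep/wake; the real model does the same job with more states and parameters fitted to data.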

We found that all species sleep in more or less the same way, although for very different lengths of time. In almost all species, sleep is sexually dimorphic: females sleep only at night while males sleep in the afternoon too. Except for D. virilis, a cosmopolitan species believed to have arisen in the Miocene in the deserts of Afghanistan. Interestingly, the same was very recently found in other desert species too. You probably don’t want to be flying around in a desert afternoon! So, sleep amount is generally conserved and obviously adapts to species-specific ecological conditions, exactly as for the elephant and the bat. But what about sleep homeostasis? How do these exotic flies react when we try to keep them awake? For this, we turned to our trusty robots and kept flies awake for 24 hours in a row by rotating their little world around every time they fell asleep. A bit like in the movie Inception. Watch the first tube from the left to see the robot in action.

When you deprive an animal of sleep, it tries to recover some of it as soon as possible. This is a hallmark of sleep homeostasis and exactly what we observed in D. melanogaster, but not in any of the other species! Like the migratory birds, they suddenly seemed fine without sleep: no signs of tiredness, and even making our robot work for 7 days in a row – 168 hours – did nothing to them. Only melanogaster showed a steady increase in sleep pressure. However, at least some species did show rebound sleep when we used a different way of keeping them awake: social stress induced by male-male interaction in a laboratory boxing-ring equivalent. Stress can induce rebound sleep in many species, including rodents, and it does so by activating specific brain circuits, as our colleagues recently showed.

Surprisingly, male-male interaction did lead to sleep rebound not just in melanogaster but also in simulans, sechellia, and yakuba. Still no signs of homeostasis in the remaining three species though! What decides whether an animal will show homeostasis? It seems the answer is in their brains. We found that, in general, sleep rebound correlated with an increase in synaptic strength: all the flies that showed rebound also showed a larger amount of a specific synaptic protein. And conversely, when we removed synaptic proteins from specific parts of the brain involved in learning and memory in D. melanogaster, we got a similar effect: no tiredness after sleep deprivation.

We also went on to look at the evolution of pharmacology in these species, and much more. Have a go at the manuscript yourself; it is hopefully easy to read for everyone. What is the take-home message? Well, we try to figure out what all this means in evolutionary terms. We think sleep has different functions in different species (doh!), and some functions therefore evolved for some species but not others. The one thing all animals have in common is that they all sit on the same planet, which has been rotating at the same speed for a very long time. We believe this adaptation created sleep in the first place, giving animals a chance to optimise their activities to days and nights. Then other sleep functions kicked in. Some animals need sleep to cope with stress; others need sleep to learn better, to memorise, or to fight bacteria. Who knows how many different functions there are? Some need sleep for multiple reasons at once. This makes sense on multiple levels and can ultimately explain why elephants can do in 1 hour what bats seem to take 20 hours for!

Ethoscope db files for all the behavioural data in the work (FTP)
Jupyter notebooks and metadata for all the figures in the paper (link)
BRP quantification in Fig. 2 – Images and quantification scripts (link)
Preprint on bioRxiv (link)
Twittorial describing the findings (link)

Ethoscopy and Ethoscope-lab

  • Ethoscopy is Python software for analysis of ethoscope data – and more! – created by Laurence Blackhurst
  • Ethoscope-lab is a pre-baked Docker container featuring an installation of the multi-user JupyterHub with Python and R kernels, ready to be used with Ethoscopy and Rethomics.

Why use ethoscope-lab?

Let me simply explain how we use it in our lab. We set up a powerful workstation that acts as a lab server and runs a dockerised ethoscope-lab. The workstation has a local copy of all our ethoscope data (about 8 terabytes as I type) and ethoscope-lab has local access to it, offering the quickest loading times. Users can then use their computer or tablet to connect to the workstation and perform data analysis directly from the browser. The setup frees them from working at their desk and allows access to their data from anywhere in the world, while guaranteeing the fastest computational performance even when they work on their laptops. Moreover, the system uses Jupyter notebooks by default, meaning each analysis can be nicely annotated and exported to be shared with the world post-publication, along with the original raw data.

To give a practical example: this series of repositories on Zenodo contains the entire dataset of our latest paper (316 GB), paired with all the notebooks we used to generate each figure. Readers can download the dataset freely, install ethoscope-lab as a Docker container on any computer (irrespective of operating system) and reproduce all our analyses!

Ethoscopy / Ethoscope-lab paper on Bioinformatics Advances
Ethoscopy on GitHub
Ethoscopy on PyPi
Ethoscope-lab Docker container on DockerHub
Jupyter Notebook tutorials for Ethoscopy on GitHub
Ethoscopy and Ethoscope-lab documentation on bookstack

What is the best Wordle strategy?

Wordle is a rather popular online code-breaking game that works as a word-based version of the old Mastermind game. If you do not know what Wordle is, you may want to read this nice overview of the game and its success published in the Guardian. If you do know what Wordle is, then you have probably wondered whether there is a perfect strategy to play it. I wondered the same and dedicated a Sunday evening of my time to running some experimental analysis, which I would like to share with the Wordle world.

The first step was to collect a dictionary of all the five-letter words in English. Every day, the Wordle javascript picks a word from a list of 2,315 words (and accepts a further 10,657 words as possible entries). A more comprehensive list of 3,450 words can be found here.

I then coded a Wordle solver: a piece of software that gets the computer to play Wordle against itself, so that we can run hundreds of thousands of iterations and do some stats (here is a sample notebook on how to use it). The first thing to check is the null hypothesis: how successful would we be if we tried to solve the puzzle using completely random words at every attempt? The answer, based on a Monte Carlo repeat of 100k games, is 0.25%. That means we would fail 99.75% of the time. We can certainly do better than that!
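The null model is easy to reproduce in a few lines of Python. The sketch below uses integer stand-ins for the actual word list; picking six distinct random guesses from a 2,315-word answer list succeeds with probability 6/2315 ≈ 0.26%, in line with the figure above:

```python
import random

# Toy reconstruction of the null model: six distinct random guesses drawn
# from the answer list, hoping one of them is the target.
N_WORDS = 2315
words = list(range(N_WORDS))    # stand-ins for the actual five-letter words

random.seed(0)
n_games, wins = 100_000, 0
for _ in range(n_games):
    target = random.choice(words)
    if target in random.sample(words, 6):
        wins += 1

print(wins / n_games)   # close to the analytic 6/2315 ≈ 0.0026
```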


We can now go and evaluate different strategies. I am not going to take a brute-force approach to this problem but rather a hypothesis-driven one, listing strategies that I believe real users are considering. I evaluate two variables in the strategy. The first variable is whether it makes sense to start with a “smart word”, that is, a word that contains a pondered amount of carefully selected letters. For instance, we may want to start with words that contain the most frequently used letters in the list. Analysing the letter frequency in all the 2,315 Wordle words, we come up with the following values:
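Counting letter frequencies is a one-liner with Python’s `collections.Counter`. The mini-dictionary below is purely illustrative; the real analysis runs over all 2,315 answers:

```python
from collections import Counter

# Hypothetical mini-dictionary standing in for the full Wordle answer list.
words = ["arose", "slate", "crane", "pride", "table", "eager"]

# Count every letter across all words and show the most frequent ones.
freq = Counter("".join(words))
print(freq.most_common(5))
```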

We can now come up with a list of words that use the most frequent letters, excluding of course words that contain double letters. The first 20 words in the list for the Wordle dataset would be the following:


Some of these words actually have the same value, given that they are anagrams of each other, but the list provides a sensible starting point for what I will be calling the “smart start” strategy. The alternative to a smart start is to use a random word every time.
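A simple way to build such a list is to score every candidate by the summed frequency of its letters, discarding words with repeated letters. Again with a tiny illustrative dictionary:

```python
from collections import Counter

# Tiny illustrative dictionary; the real one has 2,315 words.
words = ["arose", "slate", "crane", "sheep", "spoon", "truck"]
freq = Counter("".join(words))

def smart_score(word):
    """Sum of letter frequencies; words with double letters score zero."""
    if len(set(word)) < len(word):   # a repeated letter wastes information
        return 0
    return sum(freq[c] for c in word)

ranked = sorted(words, key=smart_score, reverse=True)
print(ranked[0])   # the best "smart start" word in this toy dictionary
```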

The second variable in our strategic considerations is whether it makes sense to use exclusion words. Exclusion words are words that do not contain any letter we have already tested. They allow us to learn more about which letters are or are not present in the final word. For instance, in the two examples below we use a smart word start and then an exclusion word next, which not only ignores but actually avoids all the letters used in the first attempt, even the successful ones. This allows us to learn more about which letters we should be using.

An exclusion strategy could in principle be used for attempts number 2 and 3, as in the two examples below:

Obviously, the drawback of an exclusion strategy is that we are not going to guess the word on those attempts, because we are purposely excluding the letters we know are in the final word!
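Picking an exclusion word boils down to a set-intersection test against the letters already played. A minimal sketch, with a toy dictionary:

```python
# Toy dictionary; in practice the candidates come from the full word list.
words = ["arose", "unlit", "crane", "haste", "dumpy"]

def exclusion_word(candidates, previous_guesses):
    """First candidate sharing no letters with any previous guess,
    so the next attempt probes five entirely new letters."""
    used = set("".join(previous_guesses))
    for w in candidates:
        if not used & set(w):
            return w
    return None   # no fully disjoint word left

print(exclusion_word(words, ["arose"]))   # "unlit" shares no letters with "arose"
```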

There are other aspects of the strategy that we could in principle test: for instance, when should we start using words with double letters? If we take the smart word start, I believe we should exclude words with double letters, and the same of course applies to the exclusion strategy too. So, all in all, we can now compare four different strategies:

Strategy | Use Smart Start? | Exclusion words

We ran a Monte Carlo simulation of all four strategies, with 1,000 games each, and found the following performance per strategy:

Strategy                    1      2      3      4
Success rate:               0.955  0.963  0.964  0.974
Average successful attempt: 4.322  4.09   4.09   4.263

All strategies perform very well overall, with a success rate above 95%. The distribution of successful attempts is Gaussian, as expected, with a peak at attempt 4. Obviously, any exclusion strategy precludes the possibility of guessing the word at attempt number 2. So the take-home message is the following: if we want to maximise the probability of finding the word, we should use strategy #4, which gives us the highest success rate at 97%. However, that high success rate comes at the expense of renouncing the possibility of a correct guess at attempts 1-3. If we want to maximise early successful attempts (at the expense of success rate), we should go for strategy 2 or 3.

Interestingly enough, the system can also be used to evaluate how difficult a word is. For instance, in the example below we try to solve the word QUERY (left) or DRINK (right).


Scientific conferences at the time of COVID

The social restrictions caused by the pandemic have annihilated the galaxy of scientific conferences as we knew it. Many conferences have tried to move online in the same way we moved one-to-one meetings or teaching online: we applied a linear transposition, trying to virtually recreate the very experience we had when travelling physically (for good and for bad). I would argue that the linear translation of a physical conference into the virtual space has managed to successfully reproduce almost all of the weaknesses without presenting almost any of the advantages. This is not surprising, because we basically tried to force old content onto a new medium, ignoring the limits and opportunities that the new medium could offer.

Online conferences like online journals

Something similar happened in science before, when journals transitioned from paper to the web: for years, the transition was nothing more than the digital representation of a printed page (and for many journals, this still holds true). In principle, an online journal is not bound to a physical limitation on the number of pages; it can offer new media experiences beyond the classical still figure, such as videos or interactive graphs; it could offer customisable user interfaces allowing readers to pick their favourite format (figures at the end? two columns? huge fonts?); it no longer needs to differentiate between main and supplementary figures; it costs less to produce and nothing to distribute; and so on. Yet how many journals are currently taking full advantage of these opportunities? The only one that springs to mind is eLife and, interestingly enough, it is not a journal that transitioned online but one that was born online to begin with. Transitions always carry a certain amount of inherent inertia with them, and the same inertia was probably at play when we started translating physical conferences to the online world. Rather than focusing on the opportunities that the new medium could offer, we spent countless efforts trying to parrot the physical components that were limiting us. In principle, online conferences:

  • Can offer unlimited participation: there is no limit to the number of delegates an online conference can accommodate.
  • Can reduce costs considerably: there is no need for a venue or catering; no printed abstract book; no need to reimburse the speakers’ travel expenses; unlimited participation implies no registration, and therefore no costs linked to registration itself. The latter creates an interesting paradox: the moment we decide to limit the number of participants, we introduce an administrative burden that we must then offset with registration fees. In other words: we ask people to pay so that we can limit their number, getting the worst of both worlds.
  • Can work asynchronously to operate across multiple time zones. The actual delivery of a talk does not need to happen live, nor does the discussion.
  • Can prompt new formats of delivery. We are no longer forced to sit in a half-lit theatre while following a talk, so we may as well abandon video tout court and transform the talk into a podcast. This is particularly appealing because it moves the focus from the result to its meaning, something that often gets lost in physical conferences.
  • Have no limit on the number of speakers or posters! Asynchronous recording means that everyone can upload a video of themselves presenting their data, even if just for a few minutes.
  • Can offer more thoughtful discussions. People are not forced to come up with a question there and then. They can ask questions at their own pace (i.e. listen to the talk, think about it for hours, then engage with the community).

Instead of focusing on these advantages, we tried to recreate a virtual space featuring “rooms”, lagged video interaction, weird poster presentations, and even sponsored booths.

Practical suggestions for an online conference.

Rather than limiting myself to discussing what we could do, I shall try to be more constructive and offer some practical suggestions on how all these changes could be implemented. These are the tools I have just used to organise a dummy, free-of-charge, online conference.

For live streaming: Streamyard

The easiest tool to stream live content to multiple platforms (YouTube but also Facebook or Twitch) is Streamyard. The price is very accessible (ranging from free to $40 a month) and the service is fully online, meaning that neither the organiser nor the speakers need to install any software. The video is streamed on one or multiple platforms simultaneously, meaning that anyone can watch for free simply by going to YouTube (or Facebook). The tutorial below offers a good overview of the potential of this tool. Skip to minute 17 to get an idea of what can be done (and how easily it can be achieved). Once videos are streamed on YouTube, they are automatically saved on the platform and can be accessed just like any other YouTube video. (Note: there are probably other platforms like Streamyard I am not aware of. I am not affiliated with them in any way.)

For asynchronous discussion and interaction between participants: reddit

Reddit is the most popular platform for asynchronous discussions. It is free to use and properly vetted. It is already widely used to discuss science, also in an almost-live format called an AMA (ask-me-anything). An excellent example of a scientific AMA is this one discussing the discoveries of the Event Horizon Telescope by the team of scientists behind them. Another interesting example is this AMA by Nobel laureate Randy Schekman. As you browse through it, focus on the medium, not the content. Reddit AMAs are normally meant for dissemination to the lay public, but here we are discussing the opportunity of using them for communication with peers.

A dummy online conference.

For the sake of example, I created a dummy conference on Reddit while writing this post. You can browse it here. My dummy conference currently features one video poster and two talks, as well as a live lounge for general discussions. All this was put together in less than 10 minutes and it’s absolutely free. Customisation of the interface is possible, with graphics, CSS, flairs, user attributes, etc.

Claude Desplan’s talk at my dummy online conference on reddit.

Notice how there are no registration costs, no mailing lists for reaching participants, no Zoom links to join. No expenses whatsoever. The talks would be recorded and streamed live via Streamyard, while the discussion would happen, live or asynchronously, on Reddit. Posters could be uploaded directly by their authors, who would then do an AMA on their data.

An (almost) example of great success: the World Wide series of seminars.

Both talks in my dummy conference were taken from the World Wide Neuro seminar series, the neuro branch of the World Wide Science series. Even though this is a seminar series, it is as close as it gets to what an online conference could be, and the project really has to be commended for its vision and courage. This is not the online transposition of a physical conference but rather a seminar series born from scratch to be online. Talks are still joined via Zoom, and the series does lack features for interaction which, in my opinion, could easily be added by pairing the videos with Reddit, as I did in my dummy conference example.


It is technically possible to create an online conference at absolutely no expense. The tools one can use are well-vetted and easy to use on all sides. This does require a bit of a paradigm shift but it is less laborious and difficult than one may think.

Sensory processing during sleep in Drosophila melanogaster

One of the most puzzling aspects of sleep is that it cannot happen without depriving us of our full conscious experience. Whatever the function of sleep is, it cannot be achieved without disconnecting our brains from the external world. A full conscious state and sleep are not compatible, it seems, to the point that one of the definitions of consciousness is that “it is all that fades away when we are in dreamless sleep”.

The fact that the brain has to surrender to the tyranny of sleep is also the main reason why scientists believe (in a rather dogmatic fashion) that sleep is “of the brain, by the brain, for the brain”. Yet, even during sleep, parts of our brains retain some ability to process external information. In the 1960s, Oswald et al. formally showed that sleeping humans could wake up in response to some salient stimuli, such as their names being called, but not in response to stimuli of identical strength but no salience, such as other people’s names or their own names played in reverse.

This finding has been confirmed and extended over the decades in the scientific literature, providing evidence that it applies to even more complex nuances of saliency, such as an angry tone of voice. The videos below suggest that the scientific literature is certainly less comprehensive (and less amusing) than the phenomenon in its entirety.

Pets wake up to food odours and food-related noises.
And the human brain is certainly capable of very deep sensory processing!

Even though we have plenty of scientific and anecdotal evidence that animals and humans can wake up to salient sensory stimuli during sleep, hardly anything is known about the biological underpinnings of this phenomenon. And here: enter Drosophila melanogaster! What better animal model than flies to dissect this amazing brain property?

In a paper titled “Sensory processing during sleep in Drosophila melanogaster” published in Nature, we introduce flies as the ideal animal model to dive into the biology of how a brain can simultaneously be asleep and respond to external stimuli.

Postdoc Alice French took the lead on this amazing project to show that even flies can recognise salient stimuli in their sleep and react accordingly, modulating their response based on their internal state. We initially expanded the robotic platform we had previously built in the lab, called ethoscopes, which allows us to monitor and interfere with flies using inexpensive Raspberry Pi computers. Alice wanted to build a robotic component able to challenge single flies with specific odours, but only while they were asleep, to record whether they would wake up or not. She obviously started with… LEGO!

In our first prototype, we built a robot able to operate a LEGO valve so as to send a puff of air to the sleeping fly. LEGO valves were a good start, because we needed 500 of them.

The system worked and we went from those early all-LEGO prototypes (left) to the final 3D printed product (right).

Using this ethoscope module, we could challenge sleeping flies with different odours and check whether they would respond differently to some of them. We found they did! Flies would respond to 5% acetic acid, for instance, but not to 10% acetic acid. Not only that: the valence of an odour could be modulated by internal states. Flies that had undergone a little starvation increased their response specifically to food-related odours. When we gave alcohol to flies, on the other hand, we found that drunk Drosophilae were less responsive to odours in general, showing a somehow deeper sleep state.

Now, flies are arguably the best animal model for circuit neuroscience these days. We have a full connectome of the fly brain and countless genetic tools that allow us to turn neurons on and off. So that is what we did: we started turning neurons on and off in the fly brain, looking for some that would modulate the flies’ ability to sense stimuli during sleep. We found them!


We actually found the whole circuit, connecting the “fly nose” all the way to the sleep centers in the brain. And when we used thermogenetics to switch those neurons on or off with infrared radiation, we could interfere with that process and make the flies more or less responsive.

In short, we have shown that flies can recognise and respond to odours during sleep, waking up only to those that they consider salient. We also show that this phenomenon is plastic and modulated by internal states, with animals being more likely to wake to food odours after a little starvation. We also described a blueprint for a neuronal circuit that connects the peripheral olfactory receptor neurons all the way to known sleep-regulating centres in the fly brain. We explore three prototypical gate-points that modulate subconscious processing of olfactory information during sleep: two at the periphery and one in the central brain.

The story is important and of general interest for at least three reasons:

  1. for the groundbreaking implications it has for the consciousness field, introducing flies as a model to study
    subconscious processing of information, and providing an experimental paradigm that allows us to empirically address some key questions of the field;
  2. for the implications it has for the sleep community, describing the neuronal circuit regulating sensory processing during sleep, a feature that is poorly understood in any other animal model. Our description of the circuit regulating sensory processing during sleep is the most accurate to date, and the work also has potential future medical significance, for instance in the study of altered states of consciousness, such as coma;
  3. for the implications it has on the larger neuroscience community, describing how a circuit modulates the processing of sensory information to distinguish valence.

Drosophila has been employed to study arousal threshold many times before: in many studies, flies were used to gauge sleep “depth” using quantitative mechanical stimuli, such as simple vibration or touch. Our study is the first to tackle a more puzzling property: how do we recognise qualitative stimuli during sleep? How do we recognise our own name while unconscious?

The video abstract below provides more information on the contents and the implications of the work.

The full reference to the paper is:

The work was supported by BBSRC and H2020-Marie Curie funding. The lead author of the study is Dr. Alice French.

Nature has featured the paper with a dedicated News & Views by Wahne Li and Alex Keene.

Imperial wrote a little PR piece.

Understanding the links between sleep and well-being – lessons from fruit flies

Drosophila melanogaster, commonly known as the fruit fly or, more appropriately, the vinegar fly, is the second most common animal model used in research. Initially established by Thomas Hunt Morgan at the beginning of the 1900s to provide an empirical base for the groundbreaking hypotheses of Darwin, flies have contributed to science in every realm, from development to genetics and neuroscience. Six Nobel prizes have been awarded to flies in the past century, the last one in 2017 to three Drosophila researchers for their characterisation of circadian behaviour.

My laboratory uses flies to try and answer one simple yet fascinating question: why do we sleep? What is sleep for, and why do humans and all animals seem to require it? Our approach is somewhat different from the mainstream one, because flies force us to think about the problem in an unusual and more creative way. So far this has paid off handsomely, and we have managed to make very important discoveries that apply well beyond the fly realm.

One aspect of our research that may be particularly interesting for someone puzzled about the science of physical well-being is that we do not consider sleep a prerogative of the brain, but rather a phenomenon that affects every part of our body. Whenever we lack sleep – whether for fun or for work – we feel the effect of sleep deprivation not just on our cognitive performance but on our very body too. This is more than a subjective feeling: it is a well-established phenomenon in humans and animals. Why is it so? By studying sleep deprivation in flies, we hope to answer this question.

In particular, we use state-of-the-art technology to a) deprive flies of sleep using custom-made robots and b) explore what changes at the cellular level when we lack sleep. How do genes, proteins, and other molecules change when we lack sleep?

We can use the same robotic technology to study physical behaviour in these animals: are they in good shape? Can they climb a wall as they normally would? Can they fly with the same stamina and precision? A large body of literature in the field of muscular degeneration has been built on fruit flies, and we have learned a great deal about the genes controlling these aspects and how they fail in disease.

A very important corollary of our research is that understanding the functions of sleep opens the door to what we half-jokingly call “the sleep pill”. If we could understand which aspects of sleep leave us refreshed and able to perform – both behaviourally and physically – we could then replace sleep pharmacologically. Or we could consolidate the beneficial aspects of sleep to increase its restorative power. To do that, we first need to understand what sleep is and what it does.

What would we do with a philanthropic donation?

We would use the money to retain a brilliant young research assistant for a year or longer so that she could continue working on her project. She recently graduated with an MSci in Neuroscience and joined our laboratory for a summer placement to gain first-hand insight into our research. If she could stay longer, she would join and strengthen the research line of the laboratory that looks at the direct consequences of sleep deprivation. I have worked with Imperial alumni who were kind enough to donate to my laboratory in the past. It has been a great honour and a pleasure, and I look forward to doing so again.

Video tracking and analysis of sleep in Drosophila melanogaster

Nat Protoc. 2012 Apr 26;7(5):995-1007.
Video tracking and analysis of sleep in Drosophila melanogaster.
Giorgio F. Gilestro

In the past decade, Drosophila has emerged as an ideal model organism for studying the genetic components of sleep as well as its regulation and functions. In fruit flies, sleep can be conveniently estimated by measuring the locomotor activity of the flies using techniques and instruments adapted from the field of circadian behavior. However, proper analysis of sleep requires degrees of spatial and temporal resolution higher than is needed by circadian scientists, as well as different algorithms and software for data analysis. Here I describe how to perform sleep experiments in flies using techniques and software (pySolo and pySolo-Video) previously developed in my laboratory. I focus on computer-assisted video tracking to monitor fly activity. I explain how to plan a sleep analysis experiment that covers the basic aspects of sleep, how to prepare the necessary equipment and how to analyze the data. By using this protocol, a typical sleep analysis experiment can be completed in 5-7 d.
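The protocol's core computational step is turning locomotor activity into sleep scores, using the conventional definition of sleep in Drosophila as any bout of inactivity lasting at least five minutes. A minimal sketch of that scoring logic, with illustrative names rather than the actual pySolo API:

```python
# Illustrative sketch (not the pySolo API): score sleep from binned
# movement counts, calling a fly asleep only during runs of inactive
# bins that span at least `threshold_minutes`.

def score_sleep(activity, bin_seconds=60, threshold_minutes=5):
    """Return one boolean per bin: True where the fly is asleep.

    `activity` is a sequence of movement counts per time bin.
    """
    bins_needed = threshold_minutes * 60 // bin_seconds
    inactive = [count == 0 for count in activity]
    asleep = [False] * len(activity)

    run_start = None
    for i, quiet in enumerate(inactive + [False]):  # sentinel closes last run
        if quiet and run_start is None:
            run_start = i
        elif not quiet and run_start is not None:
            if i - run_start >= bins_needed:        # run long enough: sleep
                for j in range(run_start, i):
                    asleep[j] = True
            run_start = None
    return asleep

# With 1-minute bins, only the six-bin quiet run counts as sleep;
# the isolated two-bin pause does not.
activity = [3, 0, 0, 1, 0, 0, 0, 0, 0, 0, 2]
print(score_sleep(activity))
# → [False, False, False, False, True, True, True, True, True, True, False]
```

The five-minute criterion is the field's standard immobility threshold; higher temporal resolution, as the abstract notes, is what lets video tracking apply it more accurately than coarse beam-crossing counts.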

Go to pubmed
Download paper as PDF

pyREM: a crowd trained machine learning approach to automatic analysis of EEG data

pyREM: a crowd trained machine learning approach to automatic analysis of EEG data
Quentin Geissmann, and Giorgio F Gilestro

EEG data underpin a plethora of neuroscientific questions: from sleep to consciousness and attention, many areas of neuroscience rely heavily on electrophysiological correlates of brain activity. Yet EEG analysis depends on subjective scoring and interpretation to the point that many neuroscientists consider it an art more than a systematic tool.
Can we teach this art to a computer?
Attempts at creating an objective way of scoring EEG data have been less than perfect so far, mainly because humans are reluctant to trust the judgement of a machine programmed with hard-coded values and thresholds.

pyREM aims to solve this issue, using a machine learning approach to analyse EEG data automatically. pyREM learns how to classify EEG directly from humans, mimicking the human scorer’s principles and criteria without any a priori knowledge of what an EEG means. The overall goal of the project is to teach pyREM how 1, 10, 100 or 1000 laboratories score EEG so that the software can grasp and isolate the key underlying criteria and become, in this way, the universal scorer.
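The idea of learning scoring rules directly from human-labelled epochs, rather than hard-coding thresholds, can be sketched with a toy classifier. Everything below is illustrative and is not the pyREM codebase: the two features are crude stand-ins for the spectral features a real scorer would extract, and the nearest-neighbour vote stands in for the actual learning algorithm.

```python
# Toy sketch of crowd-trained scoring: featurise human-scored epochs,
# then label new epochs by a k-nearest-neighbour vote among them.
import math
from collections import Counter

def features(epoch):
    """Two crude features per epoch: log signal power and zero-crossing rate."""
    power = sum(x * x for x in epoch) / len(epoch)
    crossings = sum(1 for a, b in zip(epoch, epoch[1:]) if a * b < 0)
    return (math.log1p(power), crossings / len(epoch))

def train(epochs, labels):
    """'Training' here is just memorising featurised, human-labelled epochs."""
    return [(features(e), lab) for e, lab in zip(epochs, labels)]

def score(model, epoch, k=3):
    """Label a new epoch by majority vote among its k nearest neighbours."""
    f = features(epoch)
    nearest = sorted(model, key=lambda m: math.dist(m[0], f))[:k]
    return Counter(lab for _, lab in nearest).most_common(1)[0][0]

# Synthetic data: "wake" epochs are fast and high-power, "sleep" epochs
# slow and low-power.
wake = [[(-1) ** i * 1.0 for i in range(100)] for _ in range(3)]
sleep = [[0.2 * math.sin(i / 10) for i in range(100)] for _ in range(3)]
model = train(wake + sleep, ["wake"] * 3 + ["sleep"] * 3)

print(score(model, [(-1) ** i * 0.9 for i in range(100)]))  # → wake
```

The point of the sketch is the shape of the pipeline: as more laboratories contribute labelled epochs, the pooled training set is what defines the scoring criteria, not any hand-tuned rule.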

If you are a laboratory interested in being part of this, please get in touch.

If you want to know more, you can

Ethoscopes: An Open Platform For High-Throughput Ethomics

PLOS Biology, 19 Oct 2017; 15(10): e2003026
Ethoscopes: An Open Platform For High-Throughput Ethomics
Quentin Geissmann, Luis Garcia Rodriguez, Esteban J. Beckwith, Alice S. French, Arian R Jamasb, and Giorgio F Gilestro

We present ethoscopes, machines for high-throughput analysis of behaviour in Drosophila and other animals. Ethoscopes provide a software and hardware solution that is reproducible and easily scalable. They perform, in real time, tracking and profiling of behaviour using a supervised machine learning algorithm; can deliver behaviourally triggered stimuli to flies in a feedback-loop mode; are highly customisable and open source. Ethoscopes can be built easily using 3D printing technology and rely on Raspberry Pi microcomputers and Arduino boards to provide affordable and flexible hardware. All software and construction specifications are available at
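The feedback-loop mode mentioned in the abstract can be sketched as a simple loop: track the animal's position frame by frame, and fire a stimulus once it has been immobile longer than a set threshold. The sketch below is hypothetical and illustrative only; the real platform runs tracking on a Raspberry Pi camera feed and drives stimulus modules through Arduino boards.

```python
# Hypothetical sketch of a behaviourally triggered feedback loop:
# fire a stimulus when an animal has been immobile too long.
import math

class FeedbackLoop:
    def __init__(self, immobility_threshold_s=10, min_movement_px=2.0, fps=2):
        self.threshold_frames = immobility_threshold_s * fps
        self.min_movement = min_movement_px
        self.last_pos = None
        self.still_frames = 0
        self.stimuli_sent = 0

    def on_frame(self, pos):
        """Feed one (x, y) tracked position; return True if a stimulus fires."""
        moved_little = (self.last_pos is not None
                        and math.dist(pos, self.last_pos) < self.min_movement)
        self.still_frames = self.still_frames + 1 if moved_little else 0
        self.last_pos = pos
        if self.still_frames >= self.threshold_frames:
            self.still_frames = 0       # re-arm after firing
            self.stimuli_sent += 1
            return True                 # e.g. rotate the tube, flash a light
        return False

# A fly that stops moving triggers the stimulus after the threshold.
loop = FeedbackLoop(immobility_threshold_s=3, fps=1)
for pos in [(0, 0), (5, 5), (5, 5), (5, 5), (5, 5)]:
    if loop.on_frame(pos):
        print("stimulus fired")
```

Closing the loop on the device itself, rather than in offline analysis, is what allows stimuli to be delivered within moments of the triggering behaviour.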

Online paper on PLoS Biology

Supplementary material.

Supplementary material 1 – webGL model of the ethoscope.
Supplementary material 2 – instruction booklet for the LEGOscope.
Supplementary material 3 – instruction booklet for the PAPERscope.
Supplementary Video 1 – Introduction to the ethoscope platform.
Supplementary Video 2 – The optogenetics component of the optomotor in action.

Featured in: