Advisory committee meetings are held once a year (or twice a year, if the student or the committee so chooses) to assess the progress of a grad student’s PhD thesis. The meeting involves a written report, submitted to the committee a week prior, and an oral presentation on the day itself. During the presentation, the validity of the research work is thoroughly discussed along with the future direction(s) of the project(s) being undertaken. Advisory committee meetings are extremely important for the successful advancement and completion of a thesis – it is where brutal yet honest feedback is conveyed. We as grad students are forced to think critically about our work and defend our hypotheses as well as our results.
My first advisory committee meeting was an intense two-hour-long session on a rather dull Tuesday afternoon. As I explained the premise of my work and my goals for the next year, my committee members brought up important questions that I had never previously considered. All the members of my committee, including my advisor, were supportive and encouraging. I learned some valuable lessons from the entire experience and got some great feedback from everyone. Some of the interesting and important points highlighted in my feedback were –
Think carefully about how to present data and set up an argument in my presentation.
Work on clearly identifying the premise that sets the stage for my hypotheses.
Be critical about my data.
Continue to read literature: more reading, and reading more critically.
Focus on developing more robust immunological assays to answer the questions in my aims.
Interact more with colleagues on campus and at other schools to learn and get insight into techniques and relevant assays (wrt understanding what works and what doesn’t).
Explain the experiments in detail before delving into my results (every assay is unique and has a question to be answered).
Think about how I want to present the previous studies done in the field that are relevant to my questions.
Provide my hypotheses with context (what data are in support of, or against, my hypotheses?).
These were just some of the significant parts of the feedback that I received. Now it’s time to put them into action and continue building my project more confidently. More later.
I recently came across this figure, which shows the key metabolic processes that dictate an immune cell’s behavior and function. Biochemists and pharmacologists sometimes focus on one or two key pathways in a disease model and forget that proteins don’t function in isolation. Protein networks are complex pathways with many overlaps. A drug designed to inhibit or activate a specific protein can also affect other proteins in the connected pathways. This figure focuses on an immune cell (a natural killer cell) and its interaction with a tumor cell, but the interplay between the different metabolic pathways applies to all kinds of cells in the body.
This figure is also quite interesting to me because I have been studying the arginase-1 (Arg1) pathway in microglial cells, and it gives me a brief overview of where my study lies in the spectrum of key cellular metabolic pathways. Arg1 is an enzyme that metabolizes L-arginine to L-ornithine and urea in the urea cycle. With the help of ornithine decarboxylase (ODC), L-ornithine is further converted into polyamines, which are important (it depends!) for cell growth and survival. I think it is quite interesting to see how Arg1 and ODC might dictate the phenotypes of microglial cells in the brain. Microglia are the brain’s resident immune cells – they chew up all the toxic stuff and get rid of it (this is known as phagocytosis). We have traditionally studied these cells based on their two active states (M1 or M2), but evidence in recent years shows that they may in fact exhibit multiple activated states (not just M1 and M2). Just like the many immune cells in the body that exhibit heterogeneous phenotypes, microglia in the brain may be no different. I’m curious whether Arg1 and ODC may be involved in regulating a similar mechanism in microglial cells during neurodegeneration.
Source: Renner K., Singer K., et al. Metabolic Hallmarks of Tumor and Immune Cells in the Tumor Microenvironment. Front Immunol. 2017; 8: 248.
Hello all! I wanted to take a few minutes to write something for Brain Awareness Week. This is important to me because my research focuses on understanding the role of the immune system in the brain. For a very long time, the brain was thought to be an “immune privileged” organ, i.e., it was thought that the brain is protected from all peripheral insults and that it is “divorced” from the rest of the body. In 2015, it was shown that there exist lymphatic vessels that connect the CNS to the rest of the body (1). The lymphatic system carries immune cells through a network of vessels and tissues; it connects the bloodstream and tissues in order to remove dead cells and other debris. The discovery of these meningeal lymphatic vessels has opened new avenues to study the connection between the brain and the rest of the body. This is especially helpful in understanding the role of the peripheral immune system on the CNS during infections, injury, and other disease insults.
My work focuses on a specific cell type in the brain known as microglia, which are the resident macrophages of the CNS (they eat up and clear out the bad stuff in the brain, like dead cells and misfolded proteins). Microglia are the only known resident immune cells of the brain. Compared to all that’s known about the cells of our body’s immune system (B cells, T cells, NK cells, neutrophils, basophils, Treg cells, MDSCs, TH1, TH2, and many many more, with several subtypes of each), it is safe to say that the immune cells of the CNS are poorly understood. My efforts are focused on understanding the role of microglial cells in neurodegenerative diseases such as Alzheimer’s Disease (AD), Parkinson’s Disease (PD), and Multiple Sclerosis (MS). These diseases are characterized by misfolded proteins that aggregate in different regions of the brain, causing neurons to degenerate and eventually die. Microglial cells in these disorders play a major role in disease progression by regulating many pathways involved in cell-cell communication, cell survival, and cell death. This is a relatively new and exciting area of study with many missing links and questions to be answered. I will try my best to keep this space alive with updates and stories! In the meantime, here’s a fun read on Leonardo da Vinci’s contributions to neuroscience: http://www.sciencedirect.com/science/article/pii/S0166223600021214
And here’s a 1504-1506 drawing of the human brain by da Vinci:
Louveau A, et al. Structural and functional features of central nervous system lymphatic vessels. Nature. 2015;523(7560):337–341. doi: 10.1038/nature14432.
Guess what? I successfully powered through my first year of grad school! My first year was all about rotating from one lab to another in hopes of finding a permanent home where I would metamorphose from a timid first-year grad student into a fearless, hopeful, and optimistic researcher powered by data and caffeine.
I cannot believe how much I underestimated the process moving forward. Between taking courses (and therefore preparing for exams and working on assignments), attending seminars, teaching two labs (and two recitations, one office hour, plus all that grading), writing grants and fellowship applications, AND doing my own research in any time that I find in between, it has been a CRAZY semester so far. One of the most disheartening things is how far behind I am on my reading. I am usually so tired by the end of the day that my brain freezes and will not take in any new information thrown at it. My eyes burn, my legs go numb, and my back starts yearning to crash on my cozy bed as soon as I get home. The papers keep piling up, experiments haunt me in my dreams (the night before every rat dissection, episodes of drug treatments and protein assays flash before my eyes!), and I dread the 1:1 meetings with my PI when I have no data to report or hypotheses to discuss. Is this normal for a second-year grad student? I don’t know. I am trying to make up for all the research time lost to coursework and teaching by working late into the evenings and on weekends. There is no difference between a Friday and a Saturday or a Sunday anymore. Is this grad life? Are we more than just grad students?
A faint silver lining amidst this craziness is that I have started to formulate the research direction I want to pursue for my main Ph.D. thesis. Of course, I have been working on other projects on the side, but I have now started to connect the dots and evaluate my main project in terms of its novelty and the required experimental framework. I have realized that the more I write about my work in grants and applications, or the more I attempt to justify it, the more I identify the gaps in knowledge that need to be filled. This is truly exciting. The funny thing is, I sometimes wish there were a guidebook that could tell me exactly what I need to think or how I should approach a problem. Unfortunately, there isn’t one. There is so much knowledge out there, but no guidelines for using it. Maybe this is what it’s all about?
Almost a year has passed since I started my PhD journey in the land of snow and maize. After four long lab rotations across three departments and hopping from one project to another, it is time to pick a permanent lab and a research direction.
I am happy to announce that I have officially joined the distinguished Department of Chemistry at my university and have begun my research at the Center for Drug Discovery, where I will work for the remainder of my doctoral degree. I couldn’t be happier with my decision, which was determined mainly by three aspects – my advisor/mentor, the research area, and the lab (its environment and members). It feels good to finally know where I am headed and not feel lost or uncertain. Every one of my rotations was unique and helped me learn the nitty-gritty of grad school. Moving forward, I will focus on brain-related disorders like Alzheimer’s Disease and work towards understanding a tiny piece of a large puzzle, one that may aid in curing or preventing the disease, or in slowing its progression. Specifically, the overarching theme of my work will be to identify and test compounds predicted for the disease by taking into account all the possible interactions between biomolecules in the protein universe (aka the proteome). Traditional drug discovery methods involve targeting a specific protein or a specific pathway, thereby limiting the possibility of finding successful leads. In reality, we know that one biomolecule interacts with several other biomolecules in several different pathways. Interactome-based drug discovery is promising because of its broader and quicker approach compared to the other mainstream pipelines that exist today.
One other major factor that helped me decide on my lab was the computational aspect of drug discovery research. Taking the challenging Computational Chemistry course this semester helped me take the first step towards learning some of the components of computer-aided drug discovery. It is amazing how the two channels of research (wet lab and dry lab) come together in solving some of the greatest problems. Anyway, I will continue to post more of my day-to-day lab rat adventures here. I am excited to start this new chapter of my life and see where it takes me! :)
[TL;DR – Not really. The brain is a complicated organ, and neural networks are simply a tool for computer scientists and mathematicians to understand how the world works. They have very little or nothing to do with elucidating the functioning of the brain!]
There is no doubt that the human brain fascinates me. Being one of the most complex organs of the human body, the brain can unveil the intricate details of our existence. Weighing just around 1.5 kilograms, our brain is made up of about 100 billion neurons. Each neuron makes around 7,000 connections with other neurons, creating 100-300 trillion synapses! It is therefore a no-brainer (pun intended) that some scientists have spent their entire lives trying to understand how the brain really functions. How is information processed and transferred through the neurons? How are electrical and chemical signals transformed into an actionable output? How are decisions made? How are these decisions influenced over time and space? How does neuroplasticity influence decision-making, cognition, memory, and behavioral patterns over the long term? These are just some of the tip-of-the-iceberg questions that remain a huge mystery.
Computer scientists and mathematicians have long been intrigued by the human brain. The first mathematical model of the artificial neural network was developed by Warren S. McCulloch (neurophysiologist and cybernetician) and Walter Pitts (logician) in 1943 in their paper, “A Logical Calculus of the Ideas Immanent in Nervous Activity“. Here, they describe the concept of a neuron – a single cell living in a network of cells that receives inputs, processes those inputs, and generates an output. Mind you – their work was not meant to decode the working of the brain. Instead, they argued that neural networks can be used as brain-inspired models that can be applied to problem solving. Some common uses of artificial neural networks today are pattern recognition (facial recognition, character recognition, etc.), time series prediction (stock markets, weather forecasts, etc.), signal processing (audio, video), control (self-driving cars), and sensing (thermometers, barometers, air quality, density, etc.). Like the brain, neural networks do not follow a linear path. Instead, information is processed collectively, in parallel, through a network of nodes (i.e., neurons). Let us consider human vision, for example. In each hemisphere of our brain, we have a primary visual cortex (V1) consisting of 140 million neurons, with thousands of connections between them. Yet our ability to visualize does not depend only on V1; additional cortices such as V2, V3, V4, and V5 are also involved in complex image processing.
The simplest neural network was invented in the 1950s by Frank Rosenblatt at the Cornell Aeronautical Laboratory. The perceptron is a computational model of a single neuron – it takes one or more binary inputs, computes a weighted sum of them, and produces a single binary output; an output of 1 “fires the neuron”.
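To make this concrete, here is a minimal sketch of a Rosenblatt-style perceptron in Python. The weights and threshold below are hand-picked for illustration (they make the neuron act as a logical AND gate); they are not taken from any original paper.

```python
def perceptron(inputs, weights, threshold):
    """Fire (output 1) if the weighted sum of the binary inputs exceeds the threshold."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum > threshold else 0

# Hand-picked weights/threshold so this perceptron behaves as an AND gate
weights, threshold = [1.0, 1.0], 1.5
print(perceptron([1, 1], weights, threshold))  # 1 -- the neuron fires
print(perceptron([1, 0], weights, threshold))  # 0 -- it does not
```

The entire “decision” lives in the weights and the threshold; Rosenblatt’s contribution was a rule for learning those weights from examples.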
Obviously, the perceptron is not an accurate representation of the brain’s decision-making process. A more realistic neural network has multiple nodes interacting with many other nodes, taking in several inputs over several layers – more like a network of several perceptrons. Another type of neuron, the sigmoid neuron, is a better model: instead of binary values, it takes inputs anywhere between 0 and 1 and outputs a real number in the same range. Perhaps the most appealing aspect of artificial neural networks is their ability to learn, through feedback loops with powerful learning algorithms. That being said, our basic understanding of real neural networks is still limited, and this makes artificial neural networks insufficient for capturing the complexity of the brain.
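For contrast, a sigmoid neuron replaces the perceptron’s hard threshold with a smooth logistic function, so the output varies continuously between 0 and 1. A sketch (the weights and bias here are again chosen arbitrarily for illustration):

```python
import math

def sigmoid_neuron(inputs, weights, bias):
    """Weighted sum plus bias, squashed into (0, 1) by the logistic function."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Unlike the perceptron, small changes in the inputs or weights produce
# small changes in the output -- the property that makes gradient-based
# learning algorithms work.
print(sigmoid_neuron([1, 1], [1.0, 1.0], -1.5))  # ~0.62, not a hard 0 or 1
```

The bias plays the role of the (negated) threshold: with zero input and zero bias, the neuron sits exactly at 0.5, maximally undecided.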
Even though strides have been made in areas of computer science and mathematics to understand global challenges and trends using artificial neural networks, we are still miles behind when it comes to using them as a model to decode the brain itself. In short, WE STILL DON’T KNOW HOW THE BRAIN WORKS. As a biologist, a more significant question to me is, “what can we learn about the human brain using artificial neural networks?” Very little progress has been made in understanding how the brain processes information, learns, makes decisions, or works with large amounts of data. The way biologists and neuroscientists think and solve problems is very different from the way mathematicians and computer scientists approach the same problems. While we know a great deal about the heart, lungs, liver, etc, very little is known about the brain and that’s troubling. The following snippet from an interview with the Machine-Learning Maestro Michael Jordan (on the Delusions of Big Data and Other Huge Engineering Efforts) is some good food for thought –
Michael Jordan: It’s true that with neuroscience, it’s going to require decades or even hundreds of years to understand the deep principles. There is progress at the very lowest levels of neuroscience. But for issues of higher cognition—how we perceive, how we remember, how we act—we have no idea how neurons are storing information, how they are computing, what the rules are, what the algorithms are, what the representations are, and the like. So we are not yet in an era in which we can be using an understanding of the brain to guide us in the construction of intelligent systems.
IEEE Spectrum: Another point you’ve made regarding the failure of neural realism is that there is nothing very neural about neural networks.
Michael Jordan: There are no spikes in deep-learning systems. There are no dendrites. And they have bidirectional signals that the brain doesn’t have. We don’t know how neurons learn. Is it actually just a small change in the synaptic weight that’s responsible for learning? That’s what these artificial neural networks are doing. In the brain, we have precious little idea how learning is actually taking place.
The thing with first-year rotations in a Ph.D. program is that anxiety starts kicking in somewhere along the way when you consciously identify the lab that you want to join and want to get started right away. Having realized that this is going to be a long journey and rushing into things may not help, I am now gaining patience and perspective, and hope to make the most of the remaining time of my first year.
Rotations are a great way to learn about a lab and get involved in the nitty-gritty of research. I was warned at the beginning by a few seniors that I would either love a lab or reject it within the first few weeks of the rotation. Mind you – this has nothing to do with the science pursued in the lab (one wouldn’t decide to rotate in a lab if they didn’t find the research interesting in the first place). This is more about getting comfortable with the way a lab functions and deciding if the environment is a good fit for you. An eight-week lab rotation is really an eight-week-long interview with a potential PI and the lab! It is essential to identify the kind of relationship you foresee having with your advisor for the next couple of years (and beyond). This is perhaps one of the most important aspects of a rotation for me, next to the research work. A good mentor-mentee relationship can go a long way and can be extremely beneficial to one’s academic/professional career. I prefer having an open channel of communication with my mentor and learning as much as possible from them.
Not all graduate programs require laboratory rotations. Many departments or programs accept or reject students simply based on their application and/or an interview. In the UK, for example, students are recruited to work on specific projects and grants as part of their Ph.D. for a period of around three years. This may not benefit candidates who wish to propose their own ideas and develop their own thesis based on their individual research interests. In the US, for most graduate programs in the life sciences (mainly biology and chemistry), the average time to graduation is around 5-6 years. I believe that the freedom and independence of this system trump the shorter graduation time of the other systems. Although I am certain that both sides have their merits and demerits, at the end of the day, the journey is unique to each one of us, and what we make of the experience matters the most.