Distill PhD research questions from iterative thinking



Yesterday (June 15th) I had a graduate meeting with my supervisor. She encouraged me to write down my research questions every other day and distill them from the iterative thinking process. I think this process is quite reasonable. Writing helps me think thoroughly, and writing the questions down from time to time keeps them haunting me in my dreams, pushing me to the point where they show up spontaneously.

Now that I have (finally) identified a research topic/area, using machine learning and computer vision for medical image analysis, I need to distill research questions from it. My motivation is rooted in my interest in the human brain: I would like to discover and understand how it operates, how we perceive, understand, think, and reason. I chose neurology in medical school largely because of this, although the question relates more to psychology or neuroscience. When I later turned to the realm of computer science and technology, I realized that machine learning and AI could help me pursue my intellectual curiosity about how the brain works. Computational neuroscience (a branch of AI that builds computational models to study the brain) is an option, and the most obvious one. But in my humble opinion, it would first require a solid understanding of and technical skills in machine learning, mathematics, and modeling. I would like to collaborate with computational neuroscientists at a later stage of my PhD research, or during a postdoc, perhaps on pain or on some mental disorders. For my PhD thesis, however, I would like to work at the application level rather than the basic-science level.

So why computer vision? At first, I was interested in using CV for general human activity analysis, such as gait analysis or fall detection, but it seemed most of my PhD work would then go into data acquisition. I read several reviews on deep learning in medicine; medical data can be categorized into EHR, imaging, sensor, and genomic data. Among these, medical imaging connects directly to CV, and through it I can, in a certain way, go back to neurology!

Here is a summary of my previous research pathway:

Medicine --> Neurology

--> Computer Science

  --> Human-Computer Interaction --> Virtual Reality

  --> Machine Learning --> Computer Vision --> Medical Imaging Analysis --> ?

The question mark at the end will (hopefully) be resolved after my three-month research question marathon.

[June 16]

  • How to make the prediction from machine learning more interpretable by doctors?

Interpretability is a recent hot topic in the machine learning community. I have seen several works on it: Explain yourself, machine. Producing simple text descriptions for AI interpretability, and The Building Blocks of Interpretability.

  • [placeholder] some questions related to neurology, aging, and medical imaging analysis.

  • Human-Computer Interaction & AI

Inspired by this great paper: Using Artificial Intelligence to Augment Human Intelligence, and by Michael Jordan's article on Intelligence Augmentation (IA).

[June 29]

Last week, while I was in Banff for the AGE-WELL Summer Institute, I came across another research conference on neuroscience and dementia. I sneaked into the poster room and was fascinated by the brain imaging work. Yes! At just that moment, I rediscovered the area I will invest my intelligence and time in: neuroimaging analysis with machine learning; neurocomputing/neuroinformatics.

Medicine --> Neurology

--> Computer Science

  --> Human-Computer Interaction --> Virtual Reality

  --> Machine Learning --> Computer Vision --> Medical Imaging Analysis --> Neuroimaging Analysis

It seems so obvious now that I have connected the dots ;)

[August 8]

Model interpretation is a big question worth investigating. A clinician may not care what fancy techniques or architectures a model uses. What s/he cares about most is: what will I gain from this tool? How reliable is it? How can I tell whether it is reliable? For example, the report a radiologist gives to the clinician does not merely state the findings; it also lists the key observations that support those findings. What the model really knows, and how to convey to doctors what the machine knows, are important topics in medical imaging analysis with deep learning. In this research area, model visualization techniques and human-computer interaction are related topics and tools that, armed with modern deep learning technology, can reveal the internal knowledge of a model explicitly to human doctors.
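To make the idea of "revealing what the model knows" concrete, here is a minimal sketch of one common visualization technique, a gradient-based saliency map: the gradient of the prediction with respect to each input pixel shows which pixels most influence the output, and could be rendered as a heat map next to the image for a doctor to inspect. This is an illustrative toy (a linear model on random data, all names are my own), not the method of any particular paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A toy "trained" model: one weight per pixel of a flattened 8x8 image.
w = rng.normal(size=64)
b = 0.0

def predict(x):
    """Predicted probability of a positive finding for flattened image x."""
    return sigmoid(x @ w + b)

def saliency(x):
    """Per-pixel saliency: |d(prediction)/d(pixel)|.

    For p = sigmoid(x.w + b), the gradient is dp/dx = p * (1 - p) * w,
    so pixels with large-magnitude weights light up the most.
    """
    p = predict(x)
    return np.abs(p * (1.0 - p) * w)

x = rng.normal(size=64)          # a toy "image"
s = saliency(x).reshape(8, 8)    # heat map a doctor could inspect
```

For a real deep network the gradient would be computed by backpropagation rather than by hand, but the interface to the doctor is the same: an image-shaped map of which regions drove the finding.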
