Peder Larson

FAU-UCSF Ci^2 Workshop

The UCSF Center for Intelligent Imaging (Ci2) hosted a two-day workshop with Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) to kickstart collaborative projects between the institutions. Day 1 featured a full day of scientific talks on a broad range of topics, while Day 2 focused on breakout sessions aimed at generating new collaborative project ideas that can also be put into grant proposals. Overall it was a wonderful workshop. In general, we brought more translational, real-world, data-driven work, while the FAU scientists brought more theoretical and structural viewpoints on how we perform intelligent imaging. Here are my brief thoughts.

Overall, the representation of Body Imaging research was good, with lots of MSK work, several of our projects leveraging Information Commons, and my group's prostate cancer projects. Even then, there are other relevant projects that were not represented.

In the space of image reconstruction and enhancement, AI advances are now becoming products. These seem to fall into three categories: denoising, super-resolution, and, most recently, image reconstruction from raw data. I have no doubt this will improve both image quality and imaging speed. However, a cautionary tale was shown: when not given enough data, these methods can fail by producing a very nice picture that simply reflects learned typical image characteristics. This can obscure or eliminate important features, with no indication (e.g. an artifact) that the reconstruction has failed. I believe that proper bounds on the use of these methods can minimize this risk, and that identifying reconstruction failures will become possible with further development.

Both institutions highlighted projects in the area of joint image reconstruction and image analysis tasks (e.g. segmentation), where the potential is that both problems can be solved jointly with neural networks and will, in fact, help each other. It is still very much an emerging area, but likely a great opportunity for collaboration between the institutions. Based on the knee MRI experience at UCSF, this could also be fairly easily applied to other anatomical targets and potentially other imaging modalities.
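
To make the idea concrete, here is a minimal sketch (in PyTorch, with purely hypothetical toy networks and loss weighting, not any specific model from the talks) of training a reconstruction network and a segmentation network jointly, so that gradients from the segmentation task also shape the reconstruction:

```python
# Minimal sketch of joint reconstruction + segmentation training.
# The architectures, loss weighting, and data shapes here are hypothetical.
import torch
import torch.nn as nn

class ReconNet(nn.Module):
    """Toy image-to-image network standing in for a learned reconstruction."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class SegNet(nn.Module):
    """Toy segmentation head operating on the reconstructed image."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_classes, 1),
        )
    def forward(self, x):
        return self.net(x)

recon_net, seg_net = ReconNet(), SegNet()
optimizer = torch.optim.Adam(
    list(recon_net.parameters()) + list(seg_net.parameters()), lr=1e-3)
recon_loss, seg_loss = nn.MSELoss(), nn.CrossEntropyLoss()
lam = 0.1  # relative weighting between the two tasks (arbitrary value)

# Dummy batch: degraded input, ground-truth image, segmentation labels
x = torch.randn(4, 1, 64, 64)
y_img = torch.randn(4, 1, 64, 64)
y_seg = torch.randint(0, 2, (4, 64, 64))

optimizer.zero_grad()
recon = recon_net(x)
seg_logits = seg_net(recon)                # segmentation sees the reconstruction
loss = recon_loss(recon, y_img) + lam * seg_loss(seg_logits, y_seg)
loss.backward()                            # gradients flow into both networks
optimizer.step()
```

The key point is simply that a single backward pass updates both networks, so the segmentation objective can, in principle, steer the reconstruction toward task-relevant features.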

Information Commons has become what looks like a relatively mature platform for ML research, and numerous projects pulling thousands, tens of thousands, even hundreds of thousands of images were highlighted. I would look very closely at using this for new research projects.

There were also a few talks working with pathology and microscopy data. This is closely related to biomedical imaging, and to many Body Imaging research projects, particularly in oncology. I do not see much use of advanced microscopy/pathology image analysis in Radiology research, which seems like a natural opportunity both to apply our skills and to extract more meaningful correlative information.

In the keynote talk by Dorin Comaniciu from Siemens Healthineers, we got a view into the significant efforts and progress being made in imaging systems and radiology workflows, as well as bigger-picture goals of Digital Twins and Virtual Medical Assistants that offered a more surreal view into our potential healthcare future. They have invested massive compute and data resources, not only in medical imaging models but also in foundation models that can drive some of their more ambitious efforts. They have clearly been staying on the cutting edge of the technologies that have led to, for example, ChatGPT, and which are in a period of explosive growth and interest. This talk reminded me of the difficulty I have in finding what I think is the best role for academic research, particularly in areas where there is intense interest from industry as well.

My final thought was to consider to what extent our algorithm development can use common or general knowledge, with development then focusing on some enrichment step for a particular use case. This was inspired by foundation models - the massive models that enable ChatGPT and its seemingly impressive capabilities - as well as some anomaly detection work from Bernhard Kainz at FAU, who presented work on a "normal" image model. In this work, the effort went into training a model to learn what normal looks like, and then using it to pick up general medical image anomalies. To me, this seems like a powerful, more generalizable paradigm, and perhaps future work would build on this to do more precise tasks (segmentation, classification).
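
As a rough illustration of the "learn normal, flag deviations" idea (using a simple convolutional autoencoder as a stand-in, which is not the specific method presented), the sketch below trains only on normal images and then treats large reconstruction errors as candidate anomalies:

```python
# Sketch of anomaly detection via a "normal" image model, using a toy
# convolutional autoencoder as a generic stand-in for the approach.
import torch
import torch.nn as nn

class NormalAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.decoder(self.encoder(x))

model = NormalAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Training step: only "normal" images, so the model learns to reproduce them well
normal_batch = torch.randn(8, 1, 64, 64)    # placeholder for normal images
optimizer.zero_grad()
loss = nn.functional.mse_loss(model(normal_batch), normal_batch)
loss.backward()
optimizer.step()

# Inference: regions the model cannot reproduce are candidate anomalies
test_image = torch.randn(1, 1, 64, 64)      # placeholder for a test image
with torch.no_grad():
    error_map = (model(test_image) - test_image) ** 2
is_anomalous = error_map.mean() > 0.5       # threshold chosen arbitrarily here
```

The appeal of this paradigm is that the expensive modeling effort goes into the general "normal" model, which could then be reused or lightly adapted for more precise downstream tasks.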

The future is bright, and we have more great opportunities thanks to these new potential collaborations. Thanks for reading!
