Jennifer "Ferby" Cremer

The Human:

I am a PhD candidate in Computer Science - Graphics and Visualization at the University of Florida. I work in SurfLab under the supervision of Dr. Jorg Peters. My work focuses on human perception of complex 3D data (specifically medical imagery) in virtual reality, both to enhance spatial reasoning about the given structures and to provide fast yet accurate representations without specialized software training.

For this multi-disciplinary project, Dr. Eric Ragan serves as co-chair of my committee and a key advisor for its user studies.

I have also been a part of the Academy Software Foundation's (ASWF) D&I Working Group since 2020, and I am particularly involved with the Summer Learning Program.

I spent the first few years of graduate school experimenting with teaching. I had the opportunity to teach undergraduate electives -- Design Patterns for Object-oriented Programming and Performant Programming in Python -- and to work under the direction of Dr. Jeremiah Blanchard as the Lead TA for Operating Systems.

After being awarded the Research in Robotic Technology Grant from the Research Foundation of the ASCRS in the summer of 2021, I was able to step back from teaching for a while to focus solely on my thesis project and return to some of my rejuvenating hobbies, like wildlife photography, painting, and going on adventures. Check out the Hobbies page to see more. I often reflect on the interfaces I encounter in my artistic activities when designing feature sets or communicating ideas to my advisors.

Publications


Google Scholar

[Conference Paper]

Cremer, J., Lausch, C., Peters, J., and Ragan, E. (accepted, 2025). Empirical Study of Virtual Reality and Desktop Systems for Qualitative Editing of 3D Meshes: Impacts of Expertise and Context. ACM Symposium on Virtual Reality Software and Technology (VRST), pp. 1-10.

For editing 3D spatial data, 3D interaction through virtual reality (VR) can be a viable alternative to 2D interfaces: research indicates that model editing in VR provides a more enjoyable experience and is fast to learn. These advantages make VR an appealing option for training new users' spatial understanding before transitioning to standard 2D tools, like Blender and Maya. But how much does model editing in VR benefit the trained user? Our experiment compares the modeling accuracy of non-modelers, casual users, and formally trained artists for objects of varying complexity in desktop and VR conditions. For users with no prior modeling experience, the study found significant improvements in qualitative accuracy and efficiency of aesthetic edits using VR. Importantly, improvements decreased with higher user experience and varied with the type of editing for different surface features. The findings suggest that adapting traditional desktop modeling tools to VR should be a situational decision based on the specific modeling scenario.

[Poster]

Cremer, J., Lausch, C., Peters, J., Terracina, K., and Ragan, E. Improving Radiology Communications and Patient Trust with Virtual Reality. ACM Symposium on Virtual Reality Software and Technology (VRST), poster extended abstract, pp. 1-2.

Interpreting 3D information based on 2D slices of the data is notoriously difficult. Yet the medical field makes intervention choices based on radiological scans on a daily basis. Volumetric reconstructions of these data sets, via voxel clouds and surface meshes, can reduce the cognitive complexity of this task. However, creating these reconstructions requires extensive software training and time, often on the part of a technician separate from the treatment team. To shorten this process, we present a system designed for the radiologists and surgeons on the care team. We integrate tools familiar to these experts and provide a stereoscopic environment for quick spatial comprehension and intuitive data curation. Based on feedback from oncology collaborators on the system in its current state, it is not only helpful for communicating tumor progression but also shows promise as a teaching tool to assist with building skills for traditional interpretation of radiology images.

[Poster]

Benda, B., Cremer, J., Fang-Wu, J., and Ragan, E. Detection of Translation Gain is Decreased When Virtual Reality Users Are Unaware of Its Presence. ACM Symposium on Virtual Reality Software and Technology (VRST), poster extended abstract, pp. 1-2.

The prevalent evaluation methods used to estimate detection of redirected walking are drawn from psychophysics and require users to know that their virtual movements are being manipulated. However, this higher-than-normal level of attention toward their movements yields conservative detection thresholds. We find that participants who were unaware that redirected walking (translation gain) was applied detected the technique at a significantly higher gain than users who were aware (at gains of 1.73 and 1.38, respectively). We provide evidence that redirected walking-based navigation solutions may be able to leverage gain values larger than current threshold guidelines would suggest.

[Doctoral Consortium]

Jennifer Cieliesz Cremer. Scan2Twin: Virtual Reality for Enhanced Anatomical Investigation. IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR 2024), Doctoral Consortium. (Link)

Interpreting 3D information from 2D slices of data points is a notoriously difficult process, especially in the medical field with its use of scan imaging. This dissertation aims to apply and assess the advantages of virtual reality (VR) for exploring volumetric visualization of medical images and enabling the efficient generation of explicit, unambiguous surface models. I will evaluate the effects of stereoscopic viewing on different methods of volumetric data visualization, experiment with tool development to increase 3D modeling accessibility, and conduct an expert review of the overall system design, which we refer to as Scan2Twin.

Research Resume:




Connect