Virtual surgery

Oral cancer is notorious for its effect on quality of life (QOL) after treatment (Gao et al. 2012). Because of the complex structures and systems involved in oral and oropharyngeal function, a surgeon cannot predict the exact functional consequences of a treatment. In this video we present a user-friendly tool that lets the surgeon simulate surgery with primary closure, including sutures and fibrosis, on a finite element (FE) model of the tongue and base of the tongue. Three patient cases showed that the biomechanical model was able to give a qualitative indication of the postoperative impairment for five manoeuvres of the tongue.

Segmentation of the tongue in a midsagittal slice of cine MRI scans

Segmentation is based on Active Shape Modelling. Selected points on the contour are statistically described by a point distribution model using principal component analysis. The gray-level profiles on lines orthogonal to the contour are also modelled with principal component analysis. The contour points are then adapted iteratively, over a number of iterations at multiple resolutions of the image, as sketched below.
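
The fitting step is compact enough to sketch. The Python fragment below is a minimal, illustrative version, assuming the gray-level profile search is supplied as a function (profile_search, like the other identifiers here, is a hypothetical name): each iteration projects the image-suggested contour back onto the point distribution model so the result stays a plausible shape.

```python
import numpy as np

def fit_to_model(mean_shape, P, eigvals, suggested):
    """Project image-suggested contour points onto the point distribution model.

    mean_shape : (2N,) mean contour from the training set
    P          : (2N, t) matrix of the first t principal components
    eigvals    : (t,) eigenvalues of the retained components
    suggested  : (2N,) contour proposed by the gray-level profile search
    """
    b = P.T @ (suggested - mean_shape)   # shape parameters explaining the points
    limit = 3.0 * np.sqrt(eigvals)       # clamp to +/- 3 standard deviations
    b = np.clip(b, -limit, limit)
    return mean_shape + P @ b            # reconstruct a plausible contour

def segment(image_pyramid, mean_shape, P, eigvals, profile_search, n_iter=10):
    """Coarse-to-fine ASM search over a multi-resolution image pyramid."""
    shape = mean_shape.copy()
    for image in image_pyramid:          # from coarse to fine resolution
        for _ in range(n_iter):
            # profile_search moves each point along its normal to the best
            # match of the modelled gray-level profile (supplied by the caller).
            shape = fit_to_model(mean_shape, P, eigvals,
                                 profile_search(image, shape))
    return shape
```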

Diffusion tensor imaging (DTI) 3D fibre tractography for musculature imaging

DTI is a sophisticated MRI technique that measures the self-diffusion of water in tissue. Water diffusion in living tissue is affected by the presence of cell membranes, organelles and other physical barriers. DTI makes it possible to characterize muscle fibres using a tractography algorithm based on the local diffusion anisotropy, and to provide insight into their local histological status. The technique is currently available at the Radiology Department of the AMC, but needs further development to reach an accuracy sufficient for patient-specific modelling.
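
To make "tractography based on local diffusion anisotropy" concrete, here is a minimal sketch, not the clinical pipeline: fractional anisotropy is computed from the 3x3 tensor, and a fibre is traced by stepping along the principal eigenvector from a seed point. tensor_field is an assumed interpolation helper, and the thresholds are generic defaults.

```python
import numpy as np

def fractional_anisotropy(D):
    """Fractional anisotropy (FA) of a 3x3 diffusion tensor D."""
    lam = np.linalg.eigvalsh(D)
    num = np.sqrt(((lam - lam.mean()) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    return np.sqrt(1.5) * num / den if den > 0 else 0.0

def track_fibre(seed, tensor_field, step=0.5, fa_min=0.2,
                max_angle_deg=45.0, max_steps=1000):
    """Trace one fibre by stepping along the principal diffusion direction.

    tensor_field(p) is assumed to return the interpolated 3x3 tensor at
    position p (a hypothetical helper; real pipelines interpolate voxel data).
    """
    points = [np.asarray(seed, float)]
    prev_dir = None
    for _ in range(max_steps):
        D = tensor_field(points[-1])
        if fractional_anisotropy(D) < fa_min:    # stop in isotropic tissue
            break
        w, V = np.linalg.eigh(D)
        direction = V[:, np.argmax(w)]           # principal eigenvector
        if prev_dir is not None:
            if direction @ prev_dir < 0:         # eigenvectors have no sign
                direction = -direction
            if direction @ prev_dir < np.cos(np.radians(max_angle_deg)):
                break                            # stop at sharp turns
        points.append(points[-1] + step * direction)
        prev_dir = direction
    return np.array(points)
```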

Biomechanical modelling using finite element models

ArtiSynth is a 3D biomechanical simulation platform being developed at the University of British Columbia, Canada. It provides an open-source, cross-platform environment in which researchers can create and interconnect various kinds of dynamic and parametric models to form complete integrated models of anatomical structures. Models can be built from a rich set of components, including particles, rigid bodies, finite elements with both linear and nonlinear materials, point-to-point muscles, and various bilateral and unilateral constraints including contact. A graphical interface allows interactive component navigation, model editing, and simulation control. Existing models include jaw, hyoid, and tongue structures, and these are being extended to include the soft palate, larynx, and pharyngeal wall. ArtiSynth is the simulation platform for the OPAL project, and has also been used to create airway models for use in articulatory speech synthesis. More examples can be found at www.artisynth.org.
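
As a purely illustrative aside (this is not ArtiSynth's API; ArtiSynth itself is Java-based), the sketch below shows the component idea on the smallest possible scale: two particles coupled by a point-to-point muscle whose force combines an activation-scaled contractile term with a passive spring.

```python
import numpy as np

def muscle_force(p0, p1, activation, f_max=5.0, k=10.0, rest=1.0):
    """Tension pulling p0 toward p1: an activation-scaled contractile term
    plus a linear passive spring around the rest length."""
    d = p1 - p0
    length = np.linalg.norm(d)
    return (activation * f_max + k * (length - rest)) * d / length

def simulate(activation, steps=2000, dt=1e-3, mass=1.0, damping=1.0):
    """Explicit Euler integration of the two-particle system."""
    pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])   # initial positions
    vel = np.zeros_like(pos)
    for _ in range(steps):
        f = muscle_force(pos[0], pos[1], activation)
        forces = np.array([f, -f]) - damping * vel        # equal and opposite
        vel += dt * forces / mass
        pos += dt * vel
    return pos

# Half activation shortens the muscle below its rest length:
print(simulate(activation=0.5))   # separation oscillates toward ~0.75
```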

Estimation of the muscle activation signals of the tongue

The input for this movie is the set of 3D positions of optical markers attached to the tongue, obtained from a two-view vision system. A biomechanical finite element model of the tongue is available, actuated by 20 muscles. Optimization software is used to find activation signals such that the simulated 3D marker positions of the model match the measured ones.
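
A hedged sketch of this inverse problem is given below, assuming a hypothetical forward function simulate_markers(activations) that runs the FE model and returns the simulated marker positions. The small L2 penalty on the activations anticipates the non-uniqueness discussed in the next paragraph: it selects a minimum-effort solution among the many that fit the markers almost equally well.

```python
import numpy as np
from scipy.optimize import least_squares

def estimate_activations(simulate_markers, measured, n_muscles=20, reg=1e-2):
    """Find activation signals so simulated markers match measured positions.

    simulate_markers : hypothetical forward model; maps an activation vector
                       in [0, 1]^n_muscles to (M, 3) simulated marker positions
    measured         : (M, 3) marker positions from the two-view vision system
    reg              : weight of an L2 effort penalty that picks one of the
                       many near-equivalent solutions
    """
    def residuals(a):
        geometric = (simulate_markers(a) - measured).ravel()
        return np.concatenate([geometric, reg * a])   # data term + penalty

    a0 = np.full(n_muscles, 0.1)                      # initial guess
    return least_squares(residuals, a0, bounds=(0.0, 1.0)).x
```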

The solution found is not patient specific because (i) the biomechanical model is not patient specific (the shape of the tongue, as well as the musculature, is generic); and (ii) the solution is not unique (the various muscles can compensate for each other). Additional technology, such as sEMG on the tongue surface, is needed to find truly patient-specific activations.

Estimation of lip shapes from 16 facial surface EMG signals

The estimator, a generalized regression neural network, was trained on a learning set consisting of four repetitions of a sequence of lip motions comprising various visemes and facial expressions. The movie shows the input (sEMG features) and output (lip shapes) of the neural network and compares the output with the shapes tracked by a triple-view vision system; the input features are derived from the sEMG signals of a fifth repetition of the sequence.
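
A generalized regression neural network is essentially Nadaraya-Watson kernel regression: the training pairs are memorized, and a prediction is the kernel-weighted average of the training outputs. The sketch below shows the idea; the identifiers and the smoothing parameter are illustrative, not the trained estimator used in the movie.

```python
import numpy as np

class GRNN:
    """Generalized regression neural network (Nadaraya-Watson regression)."""

    def __init__(self, sigma=1.0):
        self.sigma = sigma          # kernel width; tuned on held-out data

    def fit(self, X, Y):
        """X: (n, d) sEMG feature vectors; Y: (n, k) lip-shape vectors."""
        self.X = np.asarray(X, float)
        self.Y = np.asarray(Y, float)
        return self

    def predict(self, x):
        d2 = ((self.X - x) ** 2).sum(axis=1)          # squared distances
        w = np.exp(-d2 / (2.0 * self.sigma ** 2))     # Gaussian kernel weights
        return (w / w.sum()) @ self.Y                 # weighted mean of outputs

# Hypothetical usage: train on four repetitions, evaluate on the fifth.
# net = GRNN(sigma=0.5).fit(features_reps_1to4, lip_shapes_reps_1to4)
# predicted_shape = net.predict(features_rep_5[t])
```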

Articulatory speech synthesis and visualization

VocalTractLab stands for "Vocal Tract Laboratory" and is an interactive multimedia software tool that simulates the mechanism of speech production. Its central element is a 3D model of the human vocal tract, representing the surfaces of the articulators and the vocal tract walls. The shape and position of the articulators are defined by a number of vocal tract parameters. Although not yet implemented, this in principle enables the simulation of patient-specific speech. More examples can be found at www.vocaltractlab.de.
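
The link between tract geometry and sound can be illustrated with the simplest possible acoustic model, far cruder than VocalTractLab's: a uniform tube closed at the glottis and open at the lips resonates at the odd quarter-wavelength frequencies F_n = (2n - 1) c / (4 L), so changing the effective geometry shifts the formants.

```python
# A uniform closed-open tube resonates at F_n = (2n - 1) * c / (4 * L);
# c is the speed of sound in warm, humid air, L the vocal tract length.

def uniform_tube_formants(length_m=0.17, c=350.0, n=3):
    """First n resonance frequencies (Hz) of a uniform closed-open tube."""
    return [(2 * k - 1) * c / (4 * length_m) for k in range(1, n + 1)]

print(uniform_tube_formants())   # approx. [515, 1544, 2574] Hz for 17 cm
```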