Industrial ergonomics is moving away from a reactive approach, in which jobs that cause injuries are modified, to a proactive approach that emphasizes assessing each job for feasibility and safety as the workplace and processes are designed. Some aspects of job design can be reduced to a checklist or a set of numerical criteria, such as a maximum weight to be lifted or a maximum part-insertion force. But when the potential problem includes an awkward posture or a difficult reach, a more complex analysis is necessary. Software representations of humans, known as digital human models (DHM), are becoming widely used to perform the analyses for these complex situations. The figure shows the Jack human figure model from UGS-PLM Solutions being used to simulate a sheet-metal assembly task. The figure motions were generated by the HUMOSIM Ergonomics Framework, described below.
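To make the checklist idea concrete, here is a minimal sketch of a numeric screening pass. The limit values and criteria names are hypothetical placeholders, not actual ergonomic standards; a real screen would draw its limits from sources such as the NIOSH lifting equation.

```python
from dataclasses import dataclass

# Hypothetical limits, for illustration only -- real criteria come from
# published ergonomic standards, not these constants.
MAX_LIFT_KG = 23.0
MAX_INSERTION_FORCE_N = 45.0

@dataclass
class TaskSpec:
    lift_weight_kg: float
    insertion_force_n: float

def screen_task(task: TaskSpec) -> list:
    """Return the criteria the task violates (empty list = passes the screen)."""
    violations = []
    if task.lift_weight_kg > MAX_LIFT_KG:
        violations.append("lift weight exceeds limit")
    if task.insertion_force_n > MAX_INSERTION_FORCE_N:
        violations.append("insertion force exceeds limit")
    return violations

print(screen_task(TaskSpec(lift_weight_kg=30.0, insertion_force_n=20.0)))
# → ['lift weight exceeds limit']
```

A screen like this is cheap to apply but, as noted above, says nothing about postures or reaches; those require a posture- and motion-aware analysis, which is where digital human models come in.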
The largest area of application for digital human models is in vehicle design. Expensive physical mockups are being replaced with virtual prototypes that are assessed using virtual drivers, passengers, and maintainers. Many of the original human modeling tools, dating from the 1970s, were developed by the U.S. military and its contractors for cockpit design. More recently, auto, truck, and off-road equipment manufacturers have made extensive use of human models.
I've conducted DHM-related research since the early 1990s, originally focusing on vehicle design applications but now also addressing industrial ergonomics. The sections below highlight some current and recent work. For more information, see the publications page or contact me.
In 1997, Professors Don Chaffin and Julian Faraway founded a laboratory at the University of Michigan to study task-oriented human motions and to develop algorithms to predict postures and motions for ergonomics analysis with digital human models. I started working with the HUMOSIM lab in 2000 and took over from Don Chaffin as Director in 2006. HUMOSIM is supported by an industry affiliation program, with current partners Ford, General Motors, International Truck and Engine, Toyota, and the U.S. Army. A substantial amount of our support comes from the Automotive Research Center at the University of Michigan. Our Technology Partner is UGS-PLM Solutions, developers of the Jack human modeling software. We welcome new industry partners -- contact me for more information.
The HUMOSIM Laboratory takes a strongly empirical approach to the study of human motion. We've found that many preconceptions about how people move are wrong. Indeed, many of the conclusions in the human movement literature are valid only in very narrow contexts. As one example, hand trajectories are routinely reported to be linear in Cartesian space, but even a casual examination of normal human movements shows that hand trajectories for most tasks are obviously nonlinear. We conduct detailed laboratory experiments in which participants perform generic tasks, such as moving objects from place to place, that are integral to the types of task-oriented analyses ergonomists perform with digital human models. Based on a close examination of the data, we develop efficient and robust modeling structures, then estimate the parameter values from the data. We have conducted several field studies of industrial workers to validate important components of the models.
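The claim about nonlinear hand trajectories can be quantified with a simple geometric measure: how far a recorded 3-D hand path deviates from the straight line joining its endpoints. The function below is an illustrative sketch of such a measure, not a method taken from the HUMOSIM literature.

```python
import numpy as np

def path_linearity_ratio(points: np.ndarray) -> float:
    """Maximum perpendicular deviation of a 3-D path (N x 3 array) from the
    straight chord joining its endpoints, normalized by chord length.
    0 means perfectly straight; larger values mean more curved."""
    start, end = points[0], points[-1]
    chord = end - start
    chord_len = np.linalg.norm(chord)
    u = chord / chord_len                      # unit vector along the chord
    rel = points - start
    along = rel @ u                            # component along the chord
    perp = rel - np.outer(along, u)            # component perpendicular to it
    return float(np.linalg.norm(perp, axis=1).max() / chord_len)

# A semicircular arc deviates from its chord by half the chord length.
t = np.linspace(0.0, np.pi, 50)
arc = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
print(round(path_linearity_ratio(arc), 2))  # → 0.5
```

Applied to motion-capture data, a measure like this makes "linear in Cartesian space" a testable statement rather than an assumption.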
Unlike many human modeling researchers, we aren't trying to sell software. Instead, we've taken on the challenging task of creating motion simulation algorithms and ergonomic analysis tools that will work with any human figure model, including the ones already in widespread use. We have developed four distinct approaches to human motion simulation, all of which have been successful to some degree. Details of all four approaches are at humosim.org, but I'll focus here on the one that I have developed, called the HUMOSIM Ergonomics Framework.
The basic idea behind the Framework is that implementation of knowledge concerning human motion will be most efficient if the knowledge is expressed in terms of algorithms that can be written on paper, rather than as source code or executable software libraries. Moreover, a modular approach that emphasizes self-contained but interconnected modules will be more effective than an all-in-one approach. Our progress to date has validated that approach. My role has been to develop the Framework concept and to design its structure, while Ph.D. students contribute modules developed as part of their dissertation research.
A key idea behind the Framework is that intercommunicating modules can produce accurate, coordinated whole-body motion. But to demonstrate that this is feasible, we need an implementation of the whole Framework. We have chosen to code this reference implementation in the Jack human modeling environment. For that reason, the human modeling images on these pages show predominantly Jack figures. However, we are using only the forward kinematics and introspection capabilities of Jack. We're not using any of Jack's own motion or behavior algorithms. This allows us to have confidence that our algorithms can work in any human figure model. General Motors has had success implementing several Framework modules in the Delmia figure model, validating this concept.
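The figure-model-agnostic design described above can be sketched as a thin abstract interface that exposes only what the Framework is said to use: forward kinematics and introspection. Everything here is hypothetical pseudostructure for illustration; the names and methods are mine, not the Framework's actual API.

```python
from abc import ABC, abstractmethod

class FigureModel(ABC):
    """Hypothetical adapter interface: modules see only these calls,
    never the underlying software (Jack, Delmia, etc.)."""
    @abstractmethod
    def joint_names(self) -> list: ...                  # introspection
    @abstractmethod
    def set_joint_angle(self, name: str, deg: float) -> None: ...
    @abstractmethod
    def hand_position(self) -> tuple: ...               # forward kinematics

class ToyFigure(FigureModel):
    """Stand-in figure model with placeholder kinematics (not a real arm)."""
    def __init__(self):
        self.angles = {}
    def joint_names(self):
        return ["shoulder", "elbow", "wrist"]
    def set_joint_angle(self, name, deg):
        self.angles[name] = deg
    def hand_position(self):
        return (sum(self.angles.values()), 0.0, 0.0)

def reach_module(figure: FigureModel, target_deg: float) -> None:
    """Toy 'module': distributes a reach evenly across the joints it
    discovers via introspection, without knowing the figure's internals."""
    names = figure.joint_names()
    for name in names:
        figure.set_joint_angle(name, target_deg / len(names))

fig = ToyFigure()
reach_module(fig, 90.0)
print(fig.hand_position())  # → (90.0, 0.0, 0.0)
```

Because `reach_module` depends only on the abstract interface, reimplementing `FigureModel` for another figure model is all that's needed to port it, which mirrors the Jack-to-Delmia result mentioned above.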
We've conducted a half-dozen studies of seated reach, including push-button reaching tasks, object transfers, and push-pull tasks with awkward hand postures; I've led several of these projects, including one focused on torso kinematics. We've found that seated reaches typically involve more complex pelvis and lumbar-spine motions than standing reaches do, probably because the kinematic constraint imposed by the seat necessitates greater use of the available degrees of freedom to extend hand reach. Standing operators can often move their feet to position the base of the torso more optimally, although obstacles in the environment can lead to bending and twisting of the torso more typical of seated reaches.
The animation shows a simulation of a vehicle driver performing a variety of reaching tasks. To generate this simulation, the HUMOSIM Framework is passed a sequence of gaze and reach targets. The Framework coordinates the motions of the upper extremities, head/neck, torso, and lower extremities to perform the tasks. This simulation is based in part on the work of current and former Ph.D. students in the HUMOSIM lab, including Kyunghan Kim, Joshua Danker, Matt Parkinson, Su Bang Choe, and Jing Wang, and on statistical analyses of movement data by Prof. Julian Faraway.
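The input format described above, a sequence of gaze and reach targets, can be sketched as a small dispatcher that routes each task to the body segments that perform it. The task kinds, targets, and routing here are an illustrative guess at the structure, not the Framework's real interface.

```python
from collections import namedtuple

Task = namedtuple("Task", ["kind", "target"])  # kind: "gaze" or "reach"

def run_task_sequence(tasks):
    """Process tasks in order, returning a log of which (simplified)
    subsystem handled each one. Illustration only."""
    log = []
    for task in tasks:
        if task.kind == "gaze":
            log.append(f"head/neck -> look at {task.target}")
        elif task.kind == "reach":
            log.append(f"upper extremity + torso -> reach to {task.target}")
        else:
            raise ValueError(f"unknown task kind: {task.kind}")
    return log

sequence = [Task("gaze", "mirror"), Task("reach", "shifter"), Task("gaze", "radio")]
for line in run_task_sequence(sequence):
    print(line)
```

The point of the sketch is the level of abstraction: the caller supplies only targets, and the coordination of head, torso, and limbs happens below this interface.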
Manual materials handling tasks are among the jobs most commonly analyzed with human figure models. Often the pickup and place events produce the greatest loading on the operator, so accurate prediction of these transitions is critical for ergonomic analysis. The HUMOSIM Framework generates whole-body motions based on the high-level task instructions, such as "move object A to location B." One component of the Framework, known as the transition stepping and timing (Transit) model, predicts foot placements and timing (stepping) as a function of task and operator characteristics. Ph.D. Candidate David Wagner is developing the Transit model as part of his dissertation research. This QuickTime movie shows simulated object transfers. The figure to the right shows a simulated industrial assembly task. In both cases, the only inputs to the Framework are the object locations. The stepping, reach, and gaze motions are planned by the algorithms.
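One way to see the kind of question a stepping model must answer is a crude baseline rule: step if the target lies beyond a fixed reach envelope. The threshold and geometry below are made up for illustration; the actual Transit model is a statistical model fit to data, far richer than this.

```python
import math

# Hypothetical comfortable reach distance without stepping (illustrative).
REACH_THRESHOLD_M = 0.6

def needs_step(stance_xy, target_xy, threshold=REACH_THRESHOLD_M):
    """Crude baseline: an operator standing at stance_xy must step if the
    planar distance to target_xy exceeds a fixed threshold."""
    dx = target_xy[0] - stance_xy[0]
    dy = target_xy[1] - stance_xy[1]
    return math.hypot(dx, dy) > threshold

print(needs_step((0.0, 0.0), (0.4, 0.2)))  # within reach → False
print(needs_step((0.0, 0.0), (1.0, 0.5)))  # too far → True
```

A fixed-threshold rule ignores operator anthropometry, object weight, and timing; predicting *where* the feet land and *when*, as a function of task and operator characteristics, is precisely what the Transit model adds.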
For many current human modeling applications in ergonomics, the tasks are well-defined and known in advance, which facilitates a scripting approach to specifying figure tasks. However, more complicated simulations are possible if the figure's behavior can be generated based on the combination of high-level goals and emergent environmental stimuli. My colleague Omer Tsimhoni and I are developing an integrated cognitive/physical model of a driver that connects the Queuing Network -- Model Human Processor (QN-MHP) with the HUMOSIM Framework. The QN-MHP, developed at the University of Michigan by Yili Liu and his students, simulates perception, cognition, and motor action using a queuing network model of human information processing. Dr. Tsimhoni has previously connected the QN-MHP with the UMTRI Driving Simulator to create an integrated simulation of human lane-keeping and map-reading. In our current work, we will link the QN-MHP, HUMOSIM Framework, UMTRI Driving Simulator, and the U.S. Army's IMPRINT task-analysis tool to simulate driving with secondary tasks, in particular interactions with in-vehicle displays and controls. This movie shows the HUMOSIM Framework implementation in Jack used to visualize the output of the QN-MHP for a sequence of driving tasks.
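To give a feel for the queuing-network idea, here is a toy discrete-event sketch of stimuli flowing through serial perception, cognition, and motor stages, where each stage serves one item at a time so later stimuli can queue behind earlier ones. The stage names and service times are illustrative placeholders, not the actual QN-MHP parameters or structure.

```python
# Illustrative service times in milliseconds (not QN-MHP values).
SERVICE_MS = {"perception": 100, "cognition": 70, "motor": 70}
STAGES = ["perception", "cognition", "motor"]

def completion_times(arrival_times_ms):
    """Each stimulus passes through every stage in order; a stage becomes
    free only when it finishes its current item, producing queuing delays."""
    free_at = {s: 0 for s in STAGES}  # time at which each stage goes idle
    done = []
    for t in arrival_times_ms:
        ready = t
        for s in STAGES:
            start = max(ready, free_at[s])   # wait for the stage if busy
            ready = start + SERVICE_MS[s]
            free_at[s] = ready
        done.append(ready)
    return done

# A second stimulus arriving at 50 ms queues behind the first in perception.
print(completion_times([0, 50]))  # → [240, 340]
```

Even this toy version shows the key property that makes queuing networks useful for modeling secondary tasks while driving: concurrent demands contend for the same processing stages, so response times degrade under load.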
©2013 Matthew P. Reed and The University of Michigan