Acoustic camera

An acoustic camera is an imaging device used to visualize both the location and the intensity of sound sources. It combines a microphone array (a group of microphones) with signal- and image-processing software and hardware. In 1999, at the Hannover Messe (an industrial fair in Germany), the German company GFaI presented the first product to be marketed as an acoustic camera, which combined a microphone array with a digital camera.[1] Current work to improve the camera concentrates on advancing hardware and software in order to decrease camera size and increase camera mobility.[2]

How it works

An acoustic camera system is usually set up as follows: the acoustic camera (a microphone array, generally with a built-in optical camera) is connected to a data storage device, a processor, or both (typically a single computer serves as both). This entire system is required to produce the final image showing sound sources and their intensities.[1]

The microphone array "listens" to and records an area, creating a separate audio signal stream for each microphone in the array. Because each microphone in the array occupies a different position, each receives a given sound with a different delay. The delay for a microphone depends on the distance between that microphone and the sound source. The distances between each microphone and each spatial point can be precomputed and stored in memory, or computed on-site.[2]
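
As an illustration of this delay calculation, the following is a minimal sketch, assuming a speed of sound of 343 m/s and hypothetical microphone and focus-point coordinates (the names mic_positions, grid_points and delays are illustrative, not part of any particular product's software):

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air, an assumed round figure

# Hypothetical geometry: four microphones and two focus points, in metres.
mic_positions = np.array([[0.0, 0.0, 0.0],
                          [0.1, 0.0, 0.0],
                          [0.0, 0.1, 0.0],
                          [0.1, 0.1, 0.0]])
grid_points = np.array([[0.5, 0.5, 1.0],
                        [0.2, 0.8, 1.5]])

# Distance from every microphone to every focus point (shape: mics x points).
distances = np.linalg.norm(mic_positions[:, None, :] - grid_points[None, :, :],
                           axis=2)

# Propagation delay experienced by each microphone for each focus point.
delays = distances / SPEED_OF_SOUND  # seconds
print(delays)
```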

Each audio stream carries the delay of the microphone that recorded it. When the signals are processed, these delays are adjusted so that the array "focuses" on one point in space, a processing technique called beamforming. Focusing brings all sound waves originating from the focus point into phase, so they interfere constructively (reinforce one another), while sound waves originating from other points interfere destructively (cancel one another out) and are filtered out.[2] The signal-processing software averages the aligned signals to calculate the intensity of the sound at the focus point. Processing can occur on-site in real time, or can be done off-site at a later time.[1]
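
The delay-and-sum idea behind this focusing step can be sketched as follows. This is only an illustrative outline, not any manufacturer's actual implementation; it assumes signals is a NumPy array with one row per microphone and delays holds the per-microphone delays (in seconds) for the chosen focus point, computed as in the previous sketch:

```python
import numpy as np

def delay_and_sum(signals, delays, sample_rate):
    """Estimate the sound intensity at one focus point (illustrative sketch).

    signals: array of shape (num_mics, num_samples), one row per microphone.
    delays:  per-microphone propagation delays in seconds for the focus point.
    """
    num_mics, num_samples = signals.shape
    # Convert delays to whole-sample shifts and make the smallest shift zero,
    # so every channel is advanced relative to a common time reference.
    shifts = np.round(delays * sample_rate).astype(int)
    shifts -= shifts.min()
    aligned = np.zeros((num_mics, num_samples))
    for m in range(num_mics):
        s = shifts[m]
        aligned[m, :num_samples - s] = signals[m, s:]  # advance channel m by s samples
    # Waves from the focus point are now in phase across channels; averaging
    # them reinforces that source while sound from other points tends to cancel.
    beam = aligned.mean(axis=0)
    return np.mean(beam ** 2)  # mean-square value as an intensity estimate
```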

The focusing is repeated for every point in the area, with the ultimate goal of calculating the sound intensity at every point. The imaging software assigns a pixel to each point in space and colors it according to the sound intensity calculated there.[2]
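
A hedged sketch of how this scan over focus points could be organized, reusing the hypothetical delay_and_sum function from the previous sketch; if the focus points form a regular height-by-width grid, the returned intensities can be reshaped to that grid and passed through a colour map to obtain the pixel colors described above:

```python
import numpy as np

def acoustic_map(signals, mic_positions, grid_points, sample_rate,
                 speed_of_sound=343.0):
    """Return one intensity estimate per focus point (illustrative sketch)."""
    intensities = np.empty(len(grid_points))
    for i, point in enumerate(grid_points):
        # Distance, and hence delay, from every microphone to this point.
        distances = np.linalg.norm(mic_positions - point, axis=1)
        delays = distances / speed_of_sound
        # Focus the array on the point and record the estimated intensity.
        intensities[i] = delay_and_sum(signals, delays, sample_rate)
    return intensities
```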

Once this is finished, the pixel color information is overlaid on an image or 3D model, which is either captured by the built-in camera or obtained ahead of time.[1]
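
One simple way to produce such an overlay is to alpha-blend the acoustic map over the optical image. The sketch below uses matplotlib with placeholder data; the image and the 30-by-40 intensity grid are assumptions for illustration only:

```python
import matplotlib.pyplot as plt
import numpy as np

# Placeholder optical image; in practice this would be the photo taken by the
# built-in camera (for example, loaded with plt.imread).
photo = np.full((300, 400, 3), 0.8)

# Placeholder acoustic map: a 30 x 40 grid of intensities assumed to cover the
# same field of view as the photo.
intensity_grid = np.random.rand(30, 40)

plt.imshow(photo)
plt.imshow(intensity_grid, cmap="jet", alpha=0.5,
           extent=(0, photo.shape[1], photo.shape[0], 0))  # stretch map over photo
plt.colorbar(label="relative sound intensity")
plt.show()
```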

Some acoustic cameras use two-dimensional acoustic mapping. This type of camera uses a unidirectional microphone array (e.g. a rectangular grid of microphones, all facing the same direction). Two-dimensional acoustic mapping works best when the surface to be examined is planar and the acoustic camera can be set up facing the surface perpendicularly. However, the surfaces of real-world objects are often not flat, and it is not always possible to position the acoustic camera optimally.[3]

Additionally, the two-dimensional method of acoustic mapping introduces error into the calculated sound intensity at each point. Two-dimensional mapping approximates a three-dimensional surface as a plane, which allows the distance between each microphone and the focus point to be calculated relatively easily. However, this approximation ignores the distance differences caused by the surface having different depths at different points. In most applications of the acoustic camera this error is small enough to be ignored; in confined spaces, however, it becomes significant.[3]
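
A small worked example of this depth error, with entirely hypothetical geometry: a microphone at the origin focuses on a point that the planar model places 1 m away in depth, while the real surface at that point is 0.2 m closer:

```python
import numpy as np

mic = np.array([0.0, 0.0, 0.0])
assumed_point = np.array([0.5, 0.0, 1.0])  # position assumed by the flat model
actual_point = np.array([0.5, 0.0, 0.8])   # same point with its real depth

assumed_distance = np.linalg.norm(assumed_point - mic)  # ~1.118 m
actual_distance = np.linalg.norm(actual_point - mic)    # ~0.943 m

# Distance error introduced by flattening the surface: ~0.17 m, which at
# 343 m/s corresponds to roughly half a millisecond of misjudged delay.
print(assumed_distance - actual_distance)
```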

Three-dimensional acoustic cameras correct the errors of two-dimensional cameras by taking surface depth into account, and therefore measure the distances between each microphone and each spatial point correctly. These cameras produce a more accurate picture, but require a 3D model of the object or space being analyzed. Additionally, if the acoustic camera picks up sound from a point in space that is not part of the model, the sound may be mapped to an arbitrary location in the model, or may not show up at all. 3D acoustic cameras can also be used to analyze confined spaces, such as room interiors; in addition to the 3D model, this requires an omnidirectional microphone array (e.g. a sphere of microphones, each facing a different direction).[3]
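
In practice the main difference is where the focus points come from: instead of a flat grid, they can be taken from the surface points of the 3D model, so each distance already includes the correct depth. A minimal sketch, with a hypothetical model_vertices array standing in for the real model:

```python
import numpy as np

# Hypothetical vertices of a 3D model of the analysed object, in metres.
model_vertices = np.array([[0.5, 0.0, 1.0],
                           [0.5, 0.2, 0.8],
                           [0.3, 0.1, 1.2]])

mic_positions = np.array([[0.0, 0.0, 0.0],
                          [0.1, 0.0, 0.0]])

# Microphone-to-vertex distances use the model's true depths directly, which
# is what removes the planar-approximation error described above.
distances = np.linalg.norm(mic_positions[:, None, :] - model_vertices[None, :, :],
                           axis=2)
delays = distances / 343.0  # seconds, assuming a speed of sound of 343 m/s
```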

Applications

There are many applications of the acoustic camera, most of which focus on noise reduction. The camera is frequently applied to the improvement of vehicles, such as airplanes and trains, and of structures, such as wind turbines. Another application is the troubleshooting of machines and mechanical parts. These applications take advantage of the acoustic camera's sound-analysis capabilities to provide manufacturers with the data they need to improve their products.[2][4]

Airplanes are one product to which the acoustic camera can be applied. Using an acoustic camera, the noise levels within a plane during operation (flight and landing) can be seen and measured. This involves setting up and positioning the acoustic camera, along with its power supply, inside the airplane, and adding the acoustic camera to a 3D CAD model of the airplane's interior. After the setup, both continuous noise (e.g. rattling) and noise impulses (e.g. landing impact) can be analyzed.[4]

A similar setup of the acoustic camera can be used to study the noise inside passenger cars during train operation. Alternatively, the camera can be set up outside, in an area near the train tracks, to observe the train as it passes; this can give another perspective on the noise that might be heard inside the train. An outside setup can also be used to examine the squealing of train wheels caused by curves in the track.[4]

An acoustic camera can also be used to analyze the noise produced by wind turbines. The camera is set up outside, at a distance from the wind turbine, and records from a fixed position as the turbine operates. The recorded data can then be analyzed to determine which parts of the wind turbine produce the most noise and at which frequencies.[4]

Faults in machines and mechanical parts can be troubleshot with an acoustic camera. To find where the problem lies, the sound map of a properly functioning machine is compared with that of a malfunctioning one. The camera can also be used for quality control of machines: while quality control experts can often identify defective machines by the irregular noises they make, less experienced employees may not be able to do so. With an acoustic camera, non-experts can visually perform the same quality control checks as experts.[4]

Challenges

The signal processing required by the acoustic camera is computationally intensive and needs powerful hardware and plenty of memory. Because of this, signal processing is frequently done after the data have been recorded, which can hinder or prevent the use of the camera for analyzing sounds that occur only occasionally or at varying locations. Cameras that do perform signal processing in real time tend to be large and expensive. Hardware and signal-processing improvements can help overcome these difficulties. Signal-processing optimizations often focus on reducing computational complexity, storage requirements, and memory bandwidth (the rate at which data are consumed).[2]
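
To make the storage and bandwidth pressure concrete, here is a back-of-the-envelope calculation with assumed but plausible figures (a 64-microphone array, 48 kHz sampling, 24-bit samples); the numbers are illustrative and not taken from any particular product:

```python
num_mics = 64          # assumed array size
sample_rate = 48_000   # samples per second per microphone (assumed)
bytes_per_sample = 3   # 24-bit audio (assumed)

data_rate = num_mics * sample_rate * bytes_per_sample       # bytes per second
print(data_rate / 1e6, "MB/s")                              # about 9.2 MB/s
print(data_rate * 60 / 1e9, "GB per minute of recording")   # about 0.55 GB/min
```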

References
