Discuss The Problems Involved In Analysing The Auditory Environment And Describe How The Human Auditory System Overcomes Them
This essay will examine the human auditory environment, highlighting some of the problems involved in analysing it and the ways in which our auditory system overcomes them. Since the subject is so broad, the essay will largely confine itself to the localisation of sounds and to that most human of traits, speech perception.

Sound consists of variations in air pressure as a function of time. As a wave it possesses pitch (frequency), loudness and harmonic structure, or timbre, and a complex sound, such as the human voice, consists of more than one tone. These variations can be broken down into sinusoidal frequency components and subjected to Fourier analysis, which is essentially the task of the ear. Sound waves are gathered by the outer ear, the pinna, and travel through the auditory canal to the eardrum, which they cause to vibrate. These vibrations are transmitted through the middle ear by three small bones, the malleus, incus and stapes, which transfer the sound from the air to the fluid of the spiral cochlea in the inner ear. Pressure changes in the cochlear fluid displace the basilar membrane, which runs along the length of the cochlea and carries out a rough frequency analysis of the signal. The resulting distortions of the basilar membrane stimulate the hair cells of the organ of Corti, which transduce the mechanical movement into action potentials in the fibres of the auditory nerve, and hence into signals that reach the brain. One must note that while the coding of sounds in the ear is well documented, we still have a long way to go in understanding how neural information is processed at higher levels of the auditory system.

Space perception, the localisation of sound sources in order to judge their direction and distance, is extremely important to humans and animals. We are most able to locate sources in the horizontal dimension, and become progressively less able in the vertical and depth dimensions. Bearing in mind that the cues used depend upon the type of sound and its environment, the most reliable cues invariably come from a comparison of the signals reaching the two ears: binaural processing. A sound coming from my right reaches my right ear slightly before my left, since it has slightly further to travel to the far ear; this time difference is the interaural time (or phase) difference. Moreover, the sound is louder at the right ear than at the left because of the 'shadow' cast by the head: the interaural intensity difference. If we assume the human head to be roughly spherical, we can calculate that it casts an appreciable shadow on a sinusoidal tone whenever its diameter exceeds about 50% of the tone's wavelength. Binaural processing may additionally increase our ability to detect signals against a noisy background; witness the famous 'cocktail party' effect of picking out words despite a chattering crowd, a feat that remains a computational nightmare to model.
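As a rough illustration of the scale of these two binaural cues, the short Python sketch below assumes a spherical head of about 17.5 cm in diameter and applies the 50%-of-wavelength rule mentioned above, together with a standard spherical-head approximation for the extra path length to the far ear. The specific figures are illustrative assumptions rather than part of the original argument.

```python
# A minimal sketch of the two binaural cues, assuming a spherical head.
import math

SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 C
HEAD_DIAMETER = 0.175    # m; an assumed "average" head width

def interaural_time_difference(azimuth_deg: float) -> float:
    """Spherical-head (Woodworth-style) estimate of the interaural time
    difference: ITD ~ (r / c) * (theta + sin(theta))."""
    r = HEAD_DIAMETER / 2
    theta = math.radians(azimuth_deg)
    return (r / SPEED_OF_SOUND) * (theta + math.sin(theta))

def head_shadow_cutoff() -> float:
    """Frequency above which the head exceeds half a wavelength and so
    begins to cast an acoustic shadow (the 50%-of-wavelength rule)."""
    return SPEED_OF_SOUND / (2 * HEAD_DIAMETER)

if __name__ == "__main__":
    itd = interaural_time_difference(90)  # source directly to one side
    print(f"ITD for a source at 90 degrees: {itd * 1e6:.0f} microseconds")
    print(f"Head shadow appreciable above ~{head_shadow_cutoff():.0f} Hz")
```

With these assumed values the largest time difference comes out at roughly 0.65 ms, and the head shadow becomes appreciable above roughly 1,000 Hz, which fits the common observation that intensity differences matter most for high-frequency sounds while time differences dominate at low frequencies.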
Human speech itself warrants special attention. It is a very particular sound form, difficult to analyse if only because we are still uncertain what its basic unit of perception is: the syllable or the phoneme? There are far fewer phonemic sound patterns than syllabic ones, although rapid speech seems to demand a longer segment, the syllable, for accurate auditory perception. There is also some difficulty in identifying all the linguistic units in rapid speech, especially the consonants. Often a phoneme can only be correctly identified using information picked up from other syllables in the utterance, or even from entire words (for example, if a speaker has difficulty pronouncing certain consonants). Speech certainly seems worthy of special consideration: evidence indicates that certain parts of the brain are specialised for dealing with it, and its perception is inextricable from other knowledge such as syntax, semantics and familiarity with the speaker. The 'cocktail party' effect described above highlights just how efficient our analysis of speech is.

The auditory environment has an obvious role to play in our ability to detect and locate sounds accurately. Echoes in particular are produced as sound bounces off the various surfaces of a room. Ordinarily we fail to notice them, and can therefore locate a sound source accurately even in a reverberant room; if the same sound is recorded and played backwards, however, the echoes suddenly become apparent. The precedence effect allows us to bypass potentially misleading echoes by presenting them to us as part of the principal sound source, so that they have only a very small effect upon localisation, provided they conform to certain conditions, which Wallach et al. (1949) calculated. A new generation of loudspeakers has emerged to exploit such echoes: 'surround-sound' speakers, which mimic the effect of being in a concert hall. Even with a good speaker (with a response of, say, 50 to 15,000 Hz ± 5 dB), its positioning in relation to the walls and objects in a room is vital, hence the market for expensive speaker stands. Clearly there is localisation information in these reflections, and once again it is difficult to model on a computer.

Schroeder (1974) examined what is generally believed to be the optimal auditory environment: the classical concert hall. His findings shed light on how the environment affects the perceived 'quality' of the sound as well as its localisation. One factor that counted against some halls was high interaural coherence, a measure of the correlation between the signals at the two ears. Listeners were found to prefer halls with low interaural coherence, which keep the signals at the two ears relatively independent of each other and create a feeling, as Schroeder suggested, of being 'immersed' in the sound.
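Interaural coherence can be made concrete with a small calculation. The Python fragment below is an illustrative sketch rather than anything drawn from Schroeder's paper: it estimates coherence as the peak of the normalised cross-correlation between the two ear signals over short interaural delays, so that identical signals score close to 1 while independent, 'immersive' signals score close to 0.

```python
# Illustrative estimate of interaural coherence: the peak of the normalised
# cross-correlation between left- and right-ear signals within +/- 1 ms.
import numpy as np

def interaural_coherence(left: np.ndarray, right: np.ndarray,
                         sample_rate: float, max_lag_ms: float = 1.0) -> float:
    """Return the maximum normalised cross-correlation within +/- max_lag_ms."""
    left = left - left.mean()
    right = right - right.mean()
    max_lag = int(sample_rate * max_lag_ms / 1000)
    norm = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            c = np.sum(left[lag:] * right[:len(right) - lag])
        else:
            c = np.sum(left[:lag] * right[-lag:])
        best = max(best, abs(c) / norm)
    return best

if __name__ == "__main__":
    fs = 44100
    t = np.arange(fs) / fs
    tone = np.sin(2 * np.pi * 440 * t)
    # Identical signals at both ears: coherence near 1.
    print(interaural_coherence(tone, tone, fs))
    # Independent noise at each ear: coherence near 0.
    print(interaural_coherence(np.random.randn(fs), np.random.randn(fs), fs))
```

Strong lateral reflections, which reach the two ears along different paths, tend to drive this value down, which offers one way of understanding the preference Schroeder observed.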
A more recent discovery is that of Kemp (1978). Using a low-level click applied to the ear, with a microphone sealed into the ear canal, he detected sound being reflected back from within the ear itself. Some of these sounds returned with a delay of 60 ms, much too long to be attributed to middle-ear activity. The phenomenon has become known as the cochlear echo, and it is relevant here because it is only present in 'healthy' ears, those undamaged by loud sounds or hereditary conditions; damaged ears tend to suffer from tinnitus. Its presence is therefore desirable if we want to be at our maximum capacity for analysing the auditory environment.

Although binaural cues are very useful for left/right localisation, mistakes are made on the front/back plane. An easy way of combating this is simply to move one's head until one can again employ left/right comparisons. Interestingly, the pinnae introduce slight differences in the physical properties of sounds arriving from in front and from behind, and are thus useful in front/back localisation. Similar strategies, tilting the head and exploiting the asymmetry of the pinnae in all planes, can be used to resolve sound locations in the vertical dimension. Also important is an appeal to visual stimuli and to world knowledge, both of which resolve ambiguities and aid three-dimensional localisation.

The former is simply demonstrated by watching a television set: the loudspeaker is generally located to one side of the screen, yet the sound does not seem detached from the images on it (see Weerts & Thurlow (1971) and Wallach (1940) for demonstrations that perceived visual orientation has a strong effect upon auditory spatial awareness). The latter raises the larger point of inference: an appeal to world knowledge can certainly help to locate a sound source. If I am indoors and hear a Beatles song, I infer that it is emanating from my expensive hi-fi (featuring 'surround-sound' speakers, naturally), and not that the Beatles have come to visit me. There is also ambiguity arising from the distance of the sound source; quite apart from the precedence effect, familiarity with the sound can help matters quite dramatically. Inferences from world knowledge, however, are far from failsafe.

Sounds can enrich one's life, as anybody familiar with Beethoven's choral finale will testify, or perhaps even enlighten it: in Africa, tribal shamans induce deep meditative trances by rhythmically beating drums. This essay has highlighted only some of the problems posed by our auditory environment and the ways of overcoming them; it ought nevertheless to have demonstrated that the ear truly is a fascinating and complex instrument, and one to be cherished.

BIBLIOGRAPHY

Eysenck, M. & Keane, M. (1989). Cognitive Psychology. Lawrence Erlbaum Associates.

Kemp, D. T. (1978). 'Stimulated acoustic emissions from within the human auditory system'. J. Acoust. Soc. Am., vol. 64.

Moore, B. (1982). An Introduction to the Psychology of Hearing. Academic Press.

Schroeder, M. R., Gottlob, D. & Siebrasse, K. F. (1974). 'Comparative study of European concert halls: correlation of subjective preference with geometric and acoustic parameters'. J. Acoust. Soc. Am., vol. 56.

Wallach, H., Newman, E. B. & Rosenzweig, M. R. (1949). 'The precedence effect in sound localisation'. Am. J. Psychol., vol. 27.