Recording Technology

{| class="wikitable" border="1"
|-
| [[File:Blacky-2iak20.jpg]]<br />
''[[Myro Black Pearl]]''
|
=== The problems of the recording side ===
This is where the whole dilemma of multi-microphone recordings becomes apparent: delay problems, comb-filter effects, intensity differences, and so on. Depending on the wavelength of the sound waves and on the differences in travel time, completely chaotic superpositions, additions and subtractions of sound waves occur. The result is an artificial product. Above a certain transit-time difference, with the large level difference that accompanies it, the superposition of sound components becomes less problematic and is more likely to be perceived by the ear as spatial sound; but that requires path differences of many metres. The transients, however, are regularly distorted, for example in drum recordings, and the original impulse dynamics are weakened. This applies in principle to every recording made with more than one microphone. On playback via loudspeakers, the problem occurs a second time.<br />
  
1. Stereophony: even if the listener sits exactly in the middle, at millimetre-precisely the same distance from both loudspeakers, his two ears do not. If, for example, the main performer is presented equally loud through both speakers so that the voice is imaged in the centre, then each of our ears perceives the singer twice, in quick succession. The two sound events are so close in time that they are perceived as one related event; so far so good, we hear one singer and not two. However, the two sound events arriving at each ear with a time offset superimpose into a new, artificial sound mixture. Regardless of how the ear processes such closely spaced sound waves, this perception differs from the original sound structure.
  
2. The loudspeakers themselves: if drivers within a loudspeaker are reversed in polarity, or if the phase shifts within the transmission path, the output is again an artificial sound mixture. On top of this comes the radiation problem inherent in every concept, regardless of whether it is a "one-way" or a "multi-way" loudspeaker.<br />
The design of a transmission path for sound events is therefore extremely complex and subject to compromise. Mono miking and mono reproduction with a signal/time-correct loudspeaker offer the closest approximation to the original. The microphone must be positioned at a sufficiently large distance from the sound sources so as not to favour particular sound components of the instruments. All other components of the transmission path must, of course, introduce no signal/time errors.
  
|}
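The superposition effects described above can be made concrete with a short numerical sketch. This is a deliberately simplified model (a single, reflection-free delayed copy of a signal, written in Python/NumPy purely for illustration): summing a signal with a time-shifted copy of itself produces a comb filter, and reversing the polarity of one path swaps the nulls and peaks.

```python
import numpy as np

# Two copies of the same signal arrive with a time offset tau, e.g. the
# same instrument picked up by two microphones, or both stereo speakers
# heard by one ear. Summing them acts as a comb filter:
#   |H(f)| = |1 + p * exp(-2j*pi*f*tau)|
# where p = +1 normally and p = -1 if one path is polarity-reversed.
def summed_magnitude(freq_hz, tau_s, polarity=1.0):
    return abs(1.0 + polarity * np.exp(-2j * np.pi * freq_hz * tau_s))

tau = 1e-3  # 1 ms delay, roughly 34 cm of extra path at 343 m/s

null = summed_magnitude(500.0, tau)    # odd multiple of 1/(2*tau): cancellation
peak = summed_magnitude(1000.0, tau)   # multiple of 1/tau: doubling (+6 dB)
flipped = summed_magnitude(500.0, tau, polarity=-1.0)  # reversed polarity:
                                                       # the null becomes a peak
```

With a 1 ms offset the nulls fall at 500 Hz, 1500 Hz, 2500 Hz and so on, while in-between frequencies are doubled; this frequency-dependent addition and subtraction is exactly the "artificial product" the text refers to.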
  
=== The human factor ===
Experience in the studio shows:
  
1. Sound engineers often do not know what their plugins do to the signals, even though their tools would let them analyse a before-and-after; hardly anyone does. Compressors, for example, do not just compress: they massively bend the waveforms and even the spectral composition of the signals.<br />
2. Sound engineers do not hear many phenomena through their equipment, e.g. the influence of filters during DSD conversion or the sound differences between D/A converters. As a result, they simply use the default settings, i.e. the filters that are active at power-on. Nor do they hear the influence of different clocks through their equipment, because the spatial distortions and other artifacts are not recognised as such in the general fog of the studio monitors.<br />
3. Sound engineers spend a large part of their time and attention operating the sprawling digital tools. This goes hand in hand with the mindset that the digital tools can bend anything into shape: simply cram seven or eight microphones into a grand piano and afterwards somehow bend the whole mishmash into a pleasing sonic mush, again without really hearing what one is actually doing.
4. The acoustic conditions in the so-called "studios", and the frequently commercial dependence on studio outfitters or on management, do not contribute to the best working conditions either. Moreover, hardly any sound engineer dares to break out of the very conservative framework to try out and integrate new things; anything that does not conform to the once-established "norm" is frowned upon.
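The claim in point 1, that compressors do more than reduce level, can be checked numerically. The following sketch uses a toy hard-knee compressor with no attack/release smoothing (a simplification, written in Python/NumPy for illustration only, not a model of any particular plugin): applied to a pure sine tone, it visibly bends the waveform, which shows up in the spectrum as added harmonics.

```python
import numpy as np

# A toy hard-knee compressor applied sample-by-sample: above the
# threshold, the part of the instantaneous level exceeding it is
# divided by the ratio. Without time smoothing this reshapes the
# waveform itself, creating spectral components the input never had.
def compress(x, threshold=0.5, ratio=4.0):
    out = x.copy()
    over = np.abs(x) > threshold
    out[over] = np.sign(x[over]) * (
        threshold + (np.abs(x[over]) - threshold) / ratio)
    return out

fs = 48000
t = np.arange(fs) / fs
sine = np.sin(2 * np.pi * 1000 * t)   # pure 1 kHz tone, 1 s long
spectrum = np.abs(np.fft.rfft(compress(sine))) / len(sine)
# The input has energy only in the 1000 Hz bin; the compressed output
# also shows energy at 3000 Hz, 5000 Hz, ... (odd harmonics), because
# the symmetric gain reduction flattens the tops of the waveform.
```

A real compressor smooths its gain changes over attack and release times, which reduces but does not eliminate this effect; the spectral composition of the signal is altered either way, which is the point the paragraph makes.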
Taken together, the picture is sobering. But there are a few exceptions: sound engineers who rightly bear that title and produce very beautiful sound images and listening experiences.
  
 
<''zurück: [[Myroklopädie]]''><br />
 
 
<''zurück: [[Myro]]''>
 

Latest revision as of 13:57, 27 July 2018
