Thank you so much for listening! Fold4wrap5, the vocoder's initial purpose was to compress telephone transmissions to run through underwater telephone cable. The vocoder provides a compression/decompression (companding) algorithm to deliver precise transmission of encoded speech signals sampled at a rate of 8 kHz. Most vocoder implementations include independent user-callable functions that perform all of the μ-law and A-law encoding and decoding operations. The most common application for the vocoder is in telephone networks. The vocoder uses pulse code modulation (PCM) to compress, decompress, encode, and decode analog speech, which can then be transmitted and received as binary data. Two companding standards, the μ-law and the A-law, are specified. The vocoder's μ-law compresses frames of 14-bit linear PCM samples into frames of 8-bit logarithmic PCM code words; its A-law compresses 13-bit linear PCM samples into 8-bit logarithmic PCM code words.
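If anyone wants to play with the idea, here is a rough Python sketch of μ-law companding. It uses the continuous μ-law curve rather than the exact segmented G.711 tables, and the function names are just mine:

```python
import numpy as np

MU = 255.0  # mu-law parameter used in North American and Japanese networks

def mulaw_encode(samples_14bit):
    """Compress 14-bit linear PCM samples into 8-bit logarithmic code words (continuous mu-law curve)."""
    x = np.asarray(samples_14bit, dtype=np.float64) / 8192.0    # normalize the 14-bit range to roughly [-1, 1]
    y = np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)    # logarithmic compression
    return np.round((y + 1.0) * 127.5).astype(np.uint8)         # map [-1, 1] onto 0..255

def mulaw_decode(codes_8bit):
    """Expand 8-bit code words back to approximate 14-bit linear PCM."""
    y = np.asarray(codes_8bit, dtype=np.float64) / 127.5 - 1.0
    x = np.sign(y) * np.expm1(np.abs(y) * np.log1p(MU)) / MU    # inverse of the compression curve
    return np.round(x * 8192.0).astype(np.int16)
```

The A-law version works the same way, just with a slightly different curve over its 13-bit input.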
I like your visual image of granular interpolation, although I would think of it more like slicing a picture vertically and horizontally and spreading it diagonally! I see my theories of microsound as describing a naturally occurring event rather than a process (LOL, all of you sound engineers think you create everything! kidding of course). More along the lines of Curtis Roads's concepts, microsound and granular interpolation help explain a world that already exists, not create one from nothing. Dynamics processing on a microsound or micro time scale affects the amplitude of an audio signal. Operations such as compression of a sound's amplitude envelope, limiting, expansion, and noise gating are common in sound engineering, especially compression. Spectral Dynamics by Erbe (1995) applies this process to windowed spectrum analysis; think of microsound as a time slice of the Gabor matrix. Spectral Dynamics applies dynamics processing, including compression, individually to each time slice and spectral band. Again, this is reverse engineering, and more exactly reverse sound engineer(ing)! I recommend viewing your waveform in spectral view to get a really good picture of EHF and ELF freqs!
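For the sound engineers following along, here is a toy Python sketch of that idea, dynamics applied separately to each time slice and each spectral band. This is nothing like Erbe's actual Spectral Dynamics code; all the names and numbers are mine:

```python
import numpy as np

def spectral_compress(signal, frame_len=1024, hop=512, threshold=0.1, ratio=4.0):
    """Toy downward compression applied per time slice and per spectral bin."""
    window = np.hanning(frame_len)
    out = np.zeros(len(signal) + frame_len)
    for start in range(0, len(signal) - frame_len, hop):
        frame = signal[start:start + frame_len] * window         # one time slice of the Gabor-style grid
        spectrum = np.fft.rfft(frame)
        mag, phase = np.abs(spectrum), np.angle(spectrum)
        over = mag > threshold
        mag[over] = threshold + (mag[over] - threshold) / ratio  # compress only the bins above threshold
        out[start:start + frame_len] += np.fft.irfft(mag * np.exp(1j * phase))
    return out[:len(signal)]                                     # rough overlap-add reconstruction
```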
Again, thank you for listening and participating in the forum! Without lively discussion we will never get further down the road!
M
---------- Post added at 06:55 PM ---------- Previous post was at 06:50 PM ----------
Thank you, Tyder001, for listening! I rather liked the Irish woman's voice better! Ah well, what do I know, I'm not an Irish woman!
BTW! I love Katherine. We were speaking at a conference together, and she kept commenting that she wished she had met me and my research partner before she wrote Ghost (which I rather like!). She loved the give-and-take that my partner (spiritual energy work) and I (scientifically minded) had with each other! Again, thanks for listening!
M
---------- Post added at 07:00 PM ---------- Previous post was at 06:55 PM ----------
Hello Transhuman! Thank you for tuning in to the show! Actually, you are very welcome to join me next time I do an investigation. Chris O'Brien has been on many with me and seen the whole process. No, I do not fake EVPs (snicker), and no, I do not set my gear up and walk away. I do carry my field recorder with me when I record (mostly), but it is performed much like any other field recording, whether you are Chris Watson recording for the BBC or Jana Winderen recording icebergs in a really far away cold place. Thank you for posting!
M
---------- Post added at 07:07 PM ---------- Previous post was at 07:00 PM ----------
Hello Apprentice! Thank you for listening! You can make your own piezoelectric contact mics on the cheap by ripping apart RadioShack stuff! As far as my cleaning, I only use a little click/pop elimination and just a slight bit of hiss elimination. One of the things that makes EVP such an interesting form of evidence is that it has a very human character to the sound, and you don't want to lose the information that gives you. Occasionally, I will filter a bit of the subsonic rumble out, but not often.
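If it helps, here is the kind of subsonic filtering I mean, a small Python/SciPy sketch; the cutoff, order, and function name are just examples, not my exact settings:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def remove_subsonic_rumble(samples, sample_rate=44100, cutoff_hz=20.0, order=4):
    """Gentle high-pass to take sub-20 Hz rumble out while leaving the voice band alone."""
    sos = butter(order, cutoff_hz, btype='highpass', fs=sample_rate, output='sos')
    return sosfilt(sos, np.asarray(samples, dtype=np.float64))
```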
Thanks again for posting, and yes, please post your hellbroth; I for one am interested!
M
---------- Post added at 07:16 PM ---------- Previous post was at 07:07 PM ----------
Hello again, Apprentice. Yes, I often use the Zoom (H4). You're exactly correct: to pick up radio interference, it would have to be very strong and/or close by. In fact, with the amount of noise pollution in the world, it's amazing a radio frequency even gets to its receiver! Also, be very careful! Remember, most of where we work is in the hiss. It's like pulling seaweed from a swamp. Sound engineers never get this. Noise and hiss reduction is like burning down the house to get in the door. Even a center channel extractor is often too much!
M
---------- Post added at 07:28 PM ---------- Previous post was at 07:16 PM ----------
LOL, hello yet again! OK, you realize of course that frequencies occur naturally in ranges. In other words, you can't listen to something recorded at 125 Hz and expect to be hearing only 125 Hz. All waves are in a range. Hiss is also not contained simply in a noise floor, because of this same concept. (A great experiment: go into the hiss reduction feature of your software, apply a hiss reduction, and repeat the process forever.) The EVPs are not at the noise floor because the noise floor isn't at the noise floor. LOL, let me explain: remember when you were a kid on the beach and decided you were going to dig a hole in the sand near the water? Eventually water would get in the bottom of the hole. Digging the water out digs the hole deeper, and more water (from the water table - see where I'm going? he he) would get in the hole. The more hiss you reduce in a file, the more deteriorated the file becomes, the higher the noise floor becomes, and the more hiss it creates! BTW, dB is a measure of pressure; forget pressure, focus on frequency!
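If you want to try that experiment without touching your good files, here is a crude spectral-subtraction sketch in Python, nowhere near as polished as a commercial hiss reducer, and every name and number in it is mine:

```python
import numpy as np

def crude_hiss_reduction(signal, frame_len=1024, noise_frames=8):
    """One pass of crude spectral subtraction: estimate a hiss floor, then subtract it from every frame."""
    window = np.hanning(frame_len)
    hop = frame_len // 2
    starts = range(0, len(signal) - frame_len, hop)
    spectra = [np.fft.rfft(signal[i:i + frame_len] * window) for i in starts]
    noise_mag = np.mean([np.abs(s) for s in spectra[:noise_frames]], axis=0)  # "learn" the floor from the first frames
    out = np.zeros(len(signal) + frame_len)
    for i, s in zip(starts, spectra):
        mag = np.maximum(np.abs(s) - noise_mag, 0.0)                          # subtract the estimated floor
        out[i:i + frame_len] += np.fft.irfft(mag * np.exp(1j * np.angle(s)))
    return out[:len(signal)]

# The experiment: run a file through crude_hiss_reduction() over and over and listen.
# Each pass eats a little more of the low-level material and leaves more artifacts behind.
```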
Thanks! Great thread!
---------- Post added at 07:48 PM ---------- Previous post was at 07:28 PM ----------
Hello Maven! Thank you for listening!
LOL, not sure if there was much technological mumbo jumbo in there, but maybe I can help you understand what I was saying! Compression is physical. I'm going to try and get this last one in. Here is an example of diaphragm compression ratios for microphones: "Apparently, the real differences sonically are the compression ratio and diaphragm material - higher compression ratios are more 'focussed' and intense in the midrange, and lower compression ratios are more relaxed sounding. With a 3" diaphragm and a 2" exit, the 850-PB has a lower compression ratio than the 835-PB, with its 1.4" exit. This alters the sonic presentation."
Also, some people have said, for example, that a 3" or 4" diaphragm reaches low frequencies better than a 2" one.
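To put rough numbers on that quote (assuming the compression ratio is simply diaphragm area over exit area, and assuming both drivers use the same 3" diaphragm, which the quote doesn't actually say):

```python
# Hypothetical back-of-the-envelope numbers, not manufacturer specs:
ratio_850pb = (3.0 / 2.0) ** 2    # 3" diaphragm into a 2" exit   -> 2.25:1
ratio_835pb = (3.0 / 1.4) ** 2    # 3" diaphragm into a 1.4" exit -> roughly 4.6:1
```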
Edison cylinder phonographs worked off of a diaphragm.
The compression is physical from the source to the ear! LOL, I swear I am understood better by medical doctors than by sound engineers (who have built bastardised definitions for terms LOL)
The concepts of microsound theory stray from the common notion that duration and frequency are interrelated. So the distance a sound travels is not related to its duration. Again, it does relate to pressure, and the sound doesn't carry enough pressure for the ear to pick up the signal.
Hit the net for the Titanic stuff; there was a recorded incident a couple of years after the Titanic sank regarding Morse code transmissions.
LOL, I am sorry to say that builders know nothing about demolition. From Raudive and Cass, and even von Szalay in the 30s, sound engineers never understood and always claimed superior technical and academic standing. Gabor, Roads, Emoto and other quantum theorists also had the same problems. I thank you, but inviting sound engineers to this kind of a party (like the Pye recording engineers that tested Raudive under close scientific method and concluded nothing) is like inviting butchers to a vegetarian buffet (they just wouldn't get it!)
Thank you so much for participating! I really love the great comments!
M