BRAIN IMAGING & VIRTUAL REALITY TECHNOLOGY
PROCEDURES (early prep work, see OVERVIEW)
1: Neurons fire and emit electrical spikes (action potentials) that can be sensed externally. For example, an oscilloscope or AM radio sensor can use a baseline signal on one side of the skull (powered with no signal: "flat") and measure how it changes on the other side, sidestepping near/far measurement issues (see the Robert Malech work (C: Prior research and engineering, 1974)).
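The differential idea in step 1 can be sketched in a few lines of Python. This is a minimal illustration only, with invented sample values: a "flat" reference channel is subtracted from the measurement channel, so only the change introduced between the two sensors remains.

```python
# Hypothetical sketch: differential sensing against a baseline reference.
# The reference channel carries no signal ("flat"); subtracting it from
# the measured channel isolates the change, avoiding absolute near/far
# calibration of either sensor.

def differential_signal(measured, baseline):
    """Return the per-sample difference between the measured channel
    and the flat baseline reference channel."""
    if len(measured) != len(baseline):
        raise ValueError("channels must be sampled at the same rate")
    return [m - b for m, b in zip(measured, baseline)]

# Example: a shared constant offset of 0.5 cancels out, leaving the spikes.
baseline = [0.5, 0.5, 0.5, 0.5]
measured = [0.5, 1.5, 0.5, 2.5]   # spikes ride on top of the baseline
print(differential_signal(measured, baseline))  # [0.0, 1.0, 0.0, 2.0]
```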
2: One radio receiver, or multiple oscilloscope sensors with offset probe timing for higher effective sample rates, listening to whole-brain spike activity with an in-ear piece up to the tympanum (plus nostrils, etc.), can detect and measure a mass of sparks (100+ billion neurons) producing static "snow" sounds. (Skull bone reflects and/or diffuses energy like ceramics, space shuttle tiles, or chipboard; beaming in at one opening and measuring from another, or from the reflection, may help.) (Malech et al., 1974).
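The "offset timing of multiple probes" in step 2 is essentially time-interleaved sampling. A minimal sketch, with invented timestamps and values: several slow probes sample the same signal with staggered start offsets, and merging their streams by timestamp yields a higher effective sample rate.

```python
# Hypothetical sketch: time-interleaved sampling from staggered probes.

def interleave(probe_streams):
    """probe_streams: one list of (timestamp, value) pairs per probe.
    Returns a single stream sorted by timestamp."""
    merged = [sample for stream in probe_streams for sample in stream]
    return sorted(merged)

# Two probes at 1 sample per time unit, offset by 0.5 units,
# combine into 2 samples per time unit overall.
probe_a = [(0.0, 1), (1.0, 3)]
probe_b = [(0.5, 2), (1.5, 4)]
print(interleave([probe_a, probe_b]))
# [(0.0, 1), (0.5, 2), (1.0, 3), (1.5, 4)]
```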
3: The signal can be sliced over time into smaller or larger windows (the frequency sample rate); the smaller the slices, the lower the summed amplitude per slice, but the more individual sparks will be differentiated (as they will not be overlaid and summed within one slice) (~Gen. Padden et al., ~1976).
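Step 3's trade-off can be shown with a toy windowing function (invented data, not a real acquisition pipeline): coarse windows sum overlapping spikes together, while finer windows keep them apart at a lower per-window amplitude.

```python
# Hypothetical sketch: slicing a sample stream into time windows and
# summing the amplitude per window. Finer windows separate spikes that
# coarser windows would merge.

def slice_signal(samples, window):
    """Split samples into consecutive windows of `window` samples,
    returning the summed amplitude per window."""
    return [sum(samples[i:i + window]) for i in range(0, len(samples), window)]

signal = [0, 1, 0, 0, 1, 0, 0, 1]      # three unit spikes
print(slice_signal(signal, 4))  # coarse: [1, 2] -- two spikes summed together
print(slice_signal(signal, 2))  # fine:   [1, 0, 1, 1] -- spikes kept separate
```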
4: Bands can be separated by gate-slicing the amplitude into layers (horizontal slices, with time on the horizontal axis); when refined, this slicing may produce 3 or 4 audio channels (outer, inner), tactile and motor channels, and 2 or 4 visual channels (retinal neurons, digested perceptions) (~Gen. Padden et al., ~1978).
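Amplitude gate-slicing as described in step 4 can be sketched as a simple band-pass on sample values (the thresholds and data here are invented): each "horizontal layer" keeps only samples whose amplitude falls in its range.

```python
# Hypothetical sketch: gate-slicing a signal into amplitude layers.

def gate_slice(samples, low, high):
    """Keep only samples whose amplitude falls in [low, high);
    zero out the rest, isolating one horizontal layer of the signal."""
    return [s if low <= s < high else 0 for s in samples]

signal = [0.1, 0.9, 0.4, 0.7, 0.2]
print(gate_slice(signal, 0.0, 0.5))  # low band:  [0.1, 0, 0.4, 0, 0.2]
print(gate_slice(signal, 0.5, 1.0))  # high band: [0, 0.9, 0, 0.7, 0]
```

Each gated layer would then be treated as its own channel (audio, tactile, motor, visual) in the step above.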
5: PDP connectionism requires two feature sets: brain-spark sensory/motor cues, and measurement-device cues (from cameras, microphones, etc.). The neuron-firing data can be cut by amplitude band and sample rate into chunks; chunks can be further examined for other patterns: number on/off (total value), top/bottom (amplitude emphasis), left/right (with multiple sample-rate slices), with a potential temporal (decay of) relevance (the signal pattern is temporal, and, e.g., if matching a video feed, codec features may be relevant for the feature-set correlation weighting) (general PDP cue determinations (Rosenblatt et al., 1959+)).
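The per-chunk cues named in step 5 (on/off counts, top/bottom amplitude emphasis, left/right temporal emphasis) can be sketched as a small feature extractor. The threshold and sample values are invented for illustration.

```python
# Hypothetical sketch: deriving simple PDP-style cues from one chunk
# of spike data. "top/bottom" weighs amplitude above vs below a
# threshold; "left/right" weighs the early vs late half of the chunk.

def chunk_features(chunk, threshold=0.5):
    """Return a dict of simple cues for one signal chunk."""
    on = sum(1 for s in chunk if s >= threshold)
    half = len(chunk) // 2
    return {
        "on": on,
        "off": len(chunk) - on,
        "top": sum(s for s in chunk if s >= threshold),
        "bottom": sum(s for s in chunk if s < threshold),
        "left": sum(chunk[:half]),
        "right": sum(chunk[half:]),
    }

print(chunk_features([1, 0, 0, 1]))
# {'on': 2, 'off': 2, 'top': 2, 'bottom': 0, 'left': 1, 'right': 1}
```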
6: With a microphone and camera measuring the same sights and sounds as the subject, visual and sound data features can be measured and correlated with the nerve-firing data features. After training a PDP net separately for each sensory data stream (thanks Ray Kurzweil), observers will be able to see and hear what a subject sees and hears, remotely, by data signal (Miyawaki, Pasley, Gallant, et al., ~2008+).
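Since the text invokes Rosenblatt-era PDP nets, the correlation step can be illustrated with a classic single-layer perceptron. All data below is invented: toy "spike" features paired with a binary camera-derived label, chosen to be linearly separable so the perceptron rule converges.

```python
# Hypothetical sketch: a Rosenblatt-style perceptron learning to
# predict a camera-derived feature from invented neural-feature vectors.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Classic perceptron learning rule over binary labels {0, 1}."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            err = y - pred
            weights = [w + lr * err * xi for w, xi in zip(weights, x)]
            bias += lr * err
    return weights, bias

def predict(weights, bias, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

# Invented "spike count" feature vectors vs a binary visual label.
X = [[0, 1], [1, 0], [1, 1], [0, 0]]
y = [1, 0, 1, 0]          # label tracks the second feature (separable)
w, b = train_perceptron(X, y)
print([predict(w, b, x) for x in X])  # [1, 0, 1, 0] -- labels recovered
```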
7: Reverse the PDP data flow by using the same A/V and brain "popcorn" pattern data streams to train correlations for computer-data-to-brain-activation translation (use a precalculated connections table for blistering-fast translations, PDP hardware, or find an equation for real-time conversion). Feed video and audio sources to the brain; multiply out-of-sync signals to increase vividness (thanks Bill Gates: verified -- feedback & firing rates = intensity) (not louder activations, but more activations of the same pattern type: the frequency of nerve firing correlates with sensation intensity). The blind will see with cameras, the deaf will hear with microphones, and a new medium is created: direct brain-activated virtual reality. CGI immersion is possible by using video and audio sources other than cameras and microphones, and others' experiences can be recorded and played back via virtual-reality glasses (JD Casten 2014).
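The reverse direction in step 7 can be sketched as a precomputed lookup table plus repetition-based intensity. The table entries and codes are invented; the point is only that intensity is encoded by repeating the same pattern more often, not by amplifying it, mirroring the claim that firing frequency correlates with sensation intensity.

```python
# Hypothetical sketch: data-to-activation translation via a precomputed
# table, with intensity carried by repetition rather than amplitude.

TRANSLATION_TABLE = {          # invented code -> activation pattern
    "dark":  [0, 0, 1],
    "light": [1, 0, 1],
}

def translate(code, intensity=1):
    """Look up the activation pattern and repeat it `intensity` times;
    repetition, not louder amplitude, carries perceived strength here."""
    pattern = TRANSLATION_TABLE[code]
    return pattern * intensity

print(translate("light", intensity=2))  # [1, 0, 1, 1, 0, 1]
```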