I would like to discuss the way the MIDI protocol maps musical pitch.
For thousands of years, societies have discretized (divided) the musical octave to establish different musical systems, so we have come to assume discretization is the natural way of representing music. Yet all the while we have kept using the oldest instrument in the world, our voice, which is not discretized in any way.
So my question is: what reasons might prevent us from representing tones simply by their fundamental frequencies? Since we don't really need a 'fretted' voice or a fretted violin to play in tune, one might prefer to send frequency/pitch MIDI messages from a MIDI controller instead of an arbitrary tone number that must be remapped each time another musical system is used.
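For reference, standard MIDI note numbers are just indices into 12-tone equal temperament: with the default tuning, note 69 is A4 = 440 Hz, and each step multiplies the frequency by 2^(1/12). A minimal Python sketch of that mapping (the function name is my own):

```python
def midi_to_freq(note: int) -> float:
    """Standard MIDI tuning: note 69 = A4 = 440 Hz, 12-TET.
    Any other tuning system has to be remapped onto these indices."""
    return 440.0 * 2 ** ((note - 69) / 12)

print(midi_to_freq(69))  # 440.0 (A4)
print(midi_to_freq(60))  # ~261.63 (middle C)
```

The remapping problem follows directly: the note number carries no frequency of its own, so any non-12-TET system must redefine what every index means.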
Monophonic virtual instruments would benefit from this approach, since there would be no need to send MIDI note-off messages for each tone played. Nor would additional pitch-modulation messages be needed, since the performer would shape the pitch directly, the same way vibrato or glissando is played on a violin or when singing. Some new commercial controllers already go in this direction.
I don't mean the current MIDI protocol should be modified, but I think a simplified protocol for monophonic playing should be added, so performers are no longer constrained by preset parameters like portamento time, pitch range, or vibrato amplitude and frequency...
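To illustrate that constraint: today, sending an arbitrary frequency over plain MIDI means splitting it into a note number plus a 14-bit pitch-bend value, and the result only sounds right if sender and receiver agree on the bend range in semitones, which is exactly the kind of preset parameter in question. A rough Python sketch (function name and the ±2-semitone default are my own assumptions):

```python
import math

def freq_to_note_and_bend(freq: float, bend_range: float = 2.0):
    """Approximate an arbitrary frequency as the nearest MIDI note
    plus a 14-bit pitch-bend value (8192 = no bend). The receiver
    must be configured with the same bend_range for this to be
    reproduced correctly."""
    semitones = 69 + 12 * math.log2(freq / 440.0)
    note = round(semitones)
    # Leftover fraction of a semitone, scaled to half the 14-bit range.
    bend = 8192 + round((semitones - note) / bend_range * 8192)
    return note, bend

print(freq_to_note_and_bend(440.0))  # (69, 8192): A4, no bend
print(freq_to_note_and_bend(446.0))  # A4 plus a slight upward bend
```

A direct frequency message would remove this round-trip entirely, along with the hidden dependence on the bend-range setting.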