AHC VTR: Colour videotape was component right from the start

The first video tape recorders, these being Quadruplex machines, first appeared in 1956, in the same country that had previously pioneered colour TV.
Yet the Quadruplex tape format, and all subsequent VTR formats until the 1980s, were composite video. Now, composite video was introduced as a way to broadcast colour while maintaining compatibility with black and white TV, and reducing bandwidth in comparison to RGB colour.
I don't understand why pre-1980 colour videotape formats were composite video, given that these formats would have been designed with colour in mind right from the start. Why couldn't TV stations just work with component video and then mix down to composite for broadcasting?
Component video actually offers more editing accuracy than composite, for complex technical reasons.

Let's suppose that all (professional) colour video formats were component video right from the start.
 
Cost.

You could convert those Ampex B&W tape machines to color with a kit, and television stations loved saving money.

The other reason was broadcast: getting just composite color timed and synced across a whole TV station, from edit to playback, was hard enough.
 
Well okay, a black and white tape machine would be easier to convert to composite than to component. But weren't those tape machines designed with colour in mind right from the start?

It seems that the idea behind video recording was to replace film in the same way that electrical audio recording replaced acoustical recording.
The limited resolution of video formats (including most, but not all, professional formats), as well as the use of composite video (sometimes resulting in cross-colour or colour fire, especially in PAL), seemed to prevent analog video from doing that.

There are certain complications with editing composite video in its native form. Both NTSC and especially PAL place restrictions on editing accuracy.
With NTSC, there are boundaries every four fields (two frames), such that it is only possible to edit at these boundaries without disturbing the colour subcarrier.
In the case of PAL, the switching phase and the colour subcarrier only reach a common point every four frames (eight fields), so editing accuracy is further restricted.
SECAM, the most robust colour system in broadcasting, can hardly be edited at all in its native form.
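These edit boundaries fall straight out of the published subcarrier-to-line-frequency ratios. A minimal sketch, assuming the standard NTSC ratio of 455/2 cycles per line and the PAL ratio of 1135/4 + 1/625 (the function name is mine, chosen for illustration):

```python
from fractions import Fraction

# Subcarrier cycles per scan line, from the standard frequency ratios:
# NTSC: f_sc = (455/2) * f_h;  PAL: f_sc = (1135/4 + 1/625) * f_h
NTSC_CYCLES_PER_LINE = Fraction(455, 2)
PAL_CYCLES_PER_LINE = Fraction(1135, 4) + Fraction(1, 625)

def colour_framing_frames(cycles_per_line, lines_per_frame):
    """Smallest number of frames after which the subcarrier phase repeats.

    The phase realigns only after a whole number of subcarrier cycles,
    i.e. after n frames where n * cycles_per_frame is an integer; for an
    exact fraction that n is simply the denominator in lowest terms.
    """
    cycles_per_frame = cycles_per_line * lines_per_frame
    return cycles_per_frame.denominator

ntsc = colour_framing_frames(NTSC_CYCLES_PER_LINE, 525)  # 2 frames = 4 fields
pal = colour_framing_frames(PAL_CYCLES_PER_LINE, 625)    # 4 frames = 8 fields
```

NTSC's 238,875/2 cycles per frame gives a two-frame (four-field) sequence; PAL's 709,379/4 gives four frames (eight fields), matching the editing restrictions described above.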

Component video is the easiest to mix, because chrominance is carried on one or two separate channels, and the easiest to edit, because editing accuracy can be as fine as a single frame.
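The ease of mixing can be sketched in a few lines: with component signals, a crossfade is just a per-sample weighted sum applied to each channel independently, with no subcarrier phase to preserve. The channel names follow the text; the sample values are invented for illustration.

```python
def crossfade(a, b, t):
    """Linear mix of two component frames at position t in [0, 1].

    Each channel (Y, B-Y, R-Y) is blended on its own; nothing about the
    operation depends on any other channel or on a subcarrier phase.
    """
    return {ch: [(1 - t) * x + t * y for x, y in zip(a[ch], b[ch])]
            for ch in ("Y", "B-Y", "R-Y")}

# Two toy "frames" of two samples each, purely illustrative values:
frame_a = {"Y": [0.2, 0.5], "B-Y": [0.1, -0.1], "R-Y": [0.0, 0.3]}
frame_b = {"Y": [0.8, 0.4], "B-Y": [-0.2, 0.0], "R-Y": [0.1, -0.3]}
mid = crossfade(frame_a, frame_b, 0.5)
```

Doing the same thing to composite video would first require the two sources' subcarriers to be locked in frequency and phase, which is exactly the station-wide timing problem mentioned earlier in the thread.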

It would be interesting to see whether component video production was at least proposed in the 1970s or earlier.
 

Well... Commercial video recorded the over-the-airwaves signals, so it had to use that format. Professional video was able to use the huge installed base of commercial products to leverage their systems.

A special format just for professional use would have been very expensive, imo. Moreover, once you abandon the commercial standards, you might well get several different companies all trying to promote their own high-end formats which would make it much, much worse.
 
Audio tape recorders at radio stations in the analog era also recorded signals for the purpose of playing over the air. But in the case of FM stereo radio stations, I imagine they didn't record in quite the same format as they broadcast. FM stereo broadcasting is, in effect, composite stereo, with a side channel modulated onto a subcarrier superimposed on the mid channel. Yet it seems that all stereo recording formats, analog or digital, being electrical recording formats, are component stereo.*
So if composite stereo is confined to FM stereo broadcasting, with all stereo recordings being component, why wasn't the equivalent the case with (analog) video?
While commercial video recordings recorded signals for the purpose of transmitting over the air, and thus more or less had to record in the same picture format and scanning system as broadcast, I don't see why they had to record composite video rather than first recording component video and then, upon transmission, modulating the chrominance onto a carrier and superimposing it on top of the luminance. But the way it was done, until the 1980s, was as if FM radio stations had taped the full composite multiplex signal rather than component stereo.
Television stations in SECAM countries actually used PAL internally, because of problems with mixing SECAM signals together.

*The situation isn't quite analogous, because the stereo subcarrier for FM broadcasting sits entirely above the mid channel's bandwidth.
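The analogy above can be made concrete with a rough sketch of the textbook FM stereo multiplex (assumed generic form, not any particular station's gear): the broadcaster sends mid = (L+R)/2 at baseband plus side = (L-R)/2 on a 38 kHz subcarrier, whereas a tape simply stores L and R as separate components.

```python
import math

PILOT_HZ = 19_000          # stereo pilot tone
SUBCARRIER_HZ = 2 * PILOT_HZ  # 38 kHz DSB subcarrier

def mpx_sample(left, right, t):
    """One sample of the composite FM multiplex signal at time t (seconds).

    Mid (sum) stays at baseband; side (difference) rides the 38 kHz
    subcarrier, analogous to chrominance riding the colour subcarrier.
    """
    mid = (left + right) / 2
    side = (left - right) / 2
    pilot = 0.1 * math.sin(2 * math.pi * PILOT_HZ * t)
    return mid + side * math.sin(2 * math.pi * SUBCARRIER_HZ * t) + pilot

def tape_sample(left, right):
    """A 'component' recording just keeps the two channels separate."""
    return (left, right)
```

Recording composite video was the equivalent of taping `mpx_sample`'s output instead of the clean left/right pair.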

Bear in mind that it took until the digital era for video to replace film for cinemas, despite the fact that movie producers didn't record pictures and sound for the purpose of transmitting over the air. A video format developed for cinema movies, even in the analog era, could have had a higher resolution than television of the time.
First of all, television systems with as many as 1,000 lines existed as far back as WWII. Later on, a few analog video recording formats did exist that seemed to offer 35mm-film-like resolution, yet they still weren't favoured over film.
 