Z1a - The Video Card
Why not start with the most complex part? If I can't get that to work, I can just give up, quietly shove the parts in a cupboard and pretend the whole stupid thing never happened.
First though, I had to make some decisions. Like, what sort of connector and associated video standard to use?
- Many home computers of the 1980s just sent PAL video to a television: pushing a UHF signal up the TV's aerial socket, feeding a composite video signal into the TV's auxiliary input, or sending an analogue RGB signal with PAL-equivalent timing to the TV's SCART socket. Those computers ran fast enough to generate the necessary signals, so mine ought to be able to too, right?
- 1990s IBM-compatible PCs usually plugged into a monitor via a 15-pin VGA connector, which also carried an analogue RGB signal, but at higher resolutions and frame rates. I'd have to do a bit more research to see whether my hardware could run fast enough to output VGA signals.
- Nowadays, most things use HDMI. But it quickly became apparent that my little assemblage of thru-hole chips wouldn't be able to generate digital signals at HDMI bitrates. Also, HDMI was totally not a thing in the 1980s.
And, what kind of components to use to generate those rapid, carefully-timed video signals? 8-bit home computers back in the 1980s used various devices to cycle through video memory addresses and generate the necessary timing/sync signals.
- The BBC Micro and Amstrad CPC used a dedicated off-the-shelf chip, the 6845 CRTC. But chips like those don't seem to be in production any more, and I was determined to steer clear of the dodgy eBay chip market.
- The ZX81 and ZX Spectrum used a custom ULA chip. Similarly, the VIC-20 and Commodore 64 used proprietary VIC chips. Custom silicon was a bit beyond my means, and I wanted to avoid FPGAs and other programmable logic devices.
- The ZX80 used a bunch of discrete logic chips, which was a distinct possibility if I could find some in thru-hole packaging that ran fast enough.
- Also, the ZX80 and ZX81 cycled through memory addresses by tricking the CPU into executing video memory. This is not an option I seriously considered; I just like to mention it because it's a genius cost-cutting hack.
But while researching all this, I learned that various clever people — such as Ibragimov Maxim Rafikovich, Andrew Rossignol, and Lucid Science's Brad Graham — had been using 8-bit AVR microcontrollers to produce VGA video. These microcontrollers have a maximum clock speed of 20 MHz, which it turns out is just enough to chuck 256 bytes of data to one of their output ports in the time it takes to display one line of VGA video.
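Here's the back-of-envelope version of that claim, using the standard 640×480@60 timing figures (nothing here is specific to my hardware):

```c
/* Cycle budget for one VGA scanline, seen from a 20 MHz AVR. */
#include <stdio.h>

int main(void) {
    const double dot_clock = 25.175e6; /* standard VGA pixel clock */
    const double line_dots = 800.0;    /* dots per line, incl. blanking */
    const double avr_clock = 20.0e6;   /* AVR at full speed */

    double line_time = line_dots / dot_clock;  /* ~31.8 us */
    double cycles    = line_time * avr_clock;  /* ~636 cycles */

    printf("AVR cycles per scanline: %.0f\n", cycles);
    printf("Cycles per byte, for 256 bytes: %.2f\n", cycles / 256.0);
    return 0;
}
```

Around 2.5 cycles per byte: enough for a tight unrolled loop of single-cycle port writes, with a little slack left over for sync, and not much more.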
I already had various AVR microcontrollers kicking around, so that seemed like a promising avenue to explore. I looked up the VGA timings, and immersed myself in the AVR datasheets until I'd picked up enough of the assembly language to be useful. My implementation would be my own, though I did end up finding and copying Brad's interrupt latency fix.
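For anyone wondering what an interrupt latency fix is for: the AVR takes a variable number of cycles to respond to an interrupt, depending on which instruction it happens to be executing, and that jitter makes each scanline start at a slightly different time, so vertical lines wobble. One common cure (I won't claim it's exactly Brad's, but it's the shape of the technique) is to park the CPU in sleep mode between scanlines, because waking from sleep always takes the same number of cycles. A minimal C sketch of the idea, assuming a 20 MHz ATmega644P; the timing-critical parts of real firmware would be cycle-counted assembly:

```c
#include <avr/io.h>
#include <avr/interrupt.h>
#include <avr/sleep.h>

/* Fires once per scanline; the real routine bit-bangs sync and
   pixel data with cycle-counted code. */
ISR(TIMER1_COMPA_vect) {
    /* drive hsync, stream out one line of pixels */
}

int main(void) {
    /* Timer1 in CTC mode, no prescaler: interrupt every 636 cycles,
       roughly one 640x480@60 scanline at 20 MHz (the true figure is
       ~635.6 cycles, so real firmware has to fudge the timings). */
    TCCR1B = (1 << WGM12) | (1 << CS10);
    OCR1A  = 635;
    TIMSK1 = (1 << OCIE1A);

    set_sleep_mode(SLEEP_MODE_IDLE);
    sei();
    for (;;) {
        /* Waking from idle takes a fixed number of cycles, whereas
           interrupting ordinary code adds a few cycles of jitter
           depending on the instruction in flight. Sleeping here
           makes every scanline start on the same cycle. */
        sleep_mode();
    }
}
```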
And out came the breadboard:
I had trouble getting a stable signal at first, which turned out to be due to interference between the crystal connections and an adjacent output pin. If you're breadboarding this sort of thing, a packaged oscillator works better than a discrete crystal and caps.
Once I'd sorted that out and debugged my software, the AVR chip was generating something my monitor recognised:
Next problem. A computer's video hardware needs to pull data from its memory and turn it into pixels at very precise intervals. But the CPU also needs to access memory very frequently. Again, those old home computers had various ways of ensuring that the video hardware and CPU played nicely and didn't fight over memory access.
- Some, particularly machines with a 6502 processor, would take it in turns: the 6502 only uses the memory bus during one half of each clock cycle, leaving the other half free for the video hardware.
- Some would make the CPU wait until the video hardware wasn't busy. The ZX Spectrum's ULA did this by momentarily jamming the CPU's clock input, a technique I childishly like to call clock-blocking.
- In others, such as MSX machines, the CPU couldn't access video memory directly, and had to ask the video controller nicely to put things in there.
These schemes are all a bit complicated and require the CPU and video hardware to both be marching to the same clock signal. This was a bit of a concern; I knew my video chip would be running at 20 MHz, but given that I didn't really have any experience in electronic design, would I definitely be able to get the rest of the machine running at 20 MHz? And, would I want to stop at 20 MHz if the Z80 could be pushed any faster?
In the end, I cheated. My video card includes 64k of dedicated video memory, on a dual-port static RAM chip. Dual-port memory is so-called because it can be accessed by two things at once. My video chip and CPU can each run at their own clock speeds, each reading and writing video memory whenever they feel like it. 1980s home computers generally avoided dual-port memory because it was very expensive for a minor performance improvement, although it did appear on Sinclair's wishlist for their fantasy Loki computer.
This was my first non-trivial printed circuit board design, and while it probably could have been smaller, it works, so I'm happy with how it turned out.
The square chip is the video RAM. The large rectangular chip is the AVR ATmega644P, which spits out VRAM addresses, horizontal and vertical sync signals, and vertical blanking interrupts. Bytes from VRAM pass through a buffer chip, and the red, green and blue bits go to resistor arrays to be turned into analogue voltages.
Display resolution is 256×240, and the frame rate is 60 Hz, using the sync timings for 640×480 VGA and drawing each row of pixels twice. Because the horizontal resolution is 256, pixel addresses in VRAM are nice and easy to calculate: the low byte is the x-coordinate, the high byte is the y-coordinate.
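In C terms, finding a pixel's address is just gluing two bytes together; a trivial sketch:

```c
#include <stdint.h>

/* VRAM address of the pixel at (x, y): y in the high byte,
   x in the low byte. No multiplication needed, which is a big
   deal for an 8-bit CPU with no multiply instruction. */
static inline uint16_t pixel_addr(uint8_t x, uint8_t y) {
    return ((uint16_t)y << 8) | x;
}
```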
The colour depth is 8 bits per pixel, but I think the colour mapping I've used is unique: 3 green bits, 2 red bits, 2 blue bits, and 1 magenta bit which is a common least significant bit for the red and blue channels. This gives similar results to the more common 3-3-2 RGB, but makes eight shades of grey available.
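To make the shared bit concrete, here's roughly how a pixel byte decodes into per-channel intensities (the exact bit ordering shown is illustrative; what matters is the magenta bit doubling as the red and blue LSB):

```c
#include <stdint.h>

/* Decode one pixel byte into 3-bit channel intensities. The bit
   layout shown (GGGRRBBM, MSB first) is illustrative; the point is
   that the magenta bit M is a shared least significant bit for the
   red and blue channels. */
void decode_pixel(uint8_t px, uint8_t *r, uint8_t *g, uint8_t *b) {
    uint8_t m = px & 1;               /* shared magenta bit */
    *g = (px >> 5) & 7;               /* GGG: a full 3 bits */
    *r = (((px >> 3) & 3) << 1) | m;  /* RR plus shared LSB */
    *b = (((px >> 1) & 3) << 1) | m;  /* BB plus shared LSB */
}
```

Because red and blue both end up with three effective bits, any value where all three channels match is reachable, giving the full eight greys that plain 3-3-2 can't manage.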
The last 4K of VRAM is not normally displayed; of this, the last two bytes are used as hardware scrolling registers. Written by the CPU, and read by the video chip during vertical blanking, they indicate the coordinates of the first pixel to be displayed.
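From the video chip's side, scrolling just offsets the address calculation, wrapping around at 256 in both directions. A sketch (the exact register addresses, which one is x versus y, and the wrap behaviour are illustrative assumptions):

```c
#include <stdint.h>

/* Assumed locations: the last two bytes of the 64K VRAM. */
#define SCROLL_X 0xFFFEu
#define SCROLL_Y 0xFFFFu

/* VRAM address of the pixel at screen position (sx, sy), given
   scroll values latched during the previous vertical blank. The
   uint8_t additions wrap mod 256, which gives the usual hardware
   scrolling wrap-around for free. */
uint16_t scrolled_addr(uint8_t sx, uint8_t sy,
                       uint8_t scroll_x, uint8_t scroll_y) {
    uint8_t vx = (uint8_t)(scroll_x + sx);
    uint8_t vy = (uint8_t)(scroll_y + sy);
    return ((uint16_t)vy << 8) | vx;
}
```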
Once I'd built this card, I initially ran it without RAM, letting the AVR chip generate its own test patterns. Then I added the VRAM chip and updated the firmware to route data from VRAM to the video output, at which point the card just displayed the random contents of uninitialised video memory. There was a visible pattern of vertical and horizontal stripes in what I'd expected to be random data, which I guess is a quirk of the VRAM chip's physical characteristics:
I hadn't designed in the ability for the AVR chip to write to video memory, but realised I could add it by soldering a single wire to the back of the board, because what's a PCB without at least one bodge wire? That way, I could have the AVR chip write test patterns to video memory and confirm that the right pixels were being displayed in the right places.
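The test patterns themselves can be dead simple; anything with obvious x/y structure does the job. A sketch, with vram_write() as a hypothetical stand-in for the real address-port-plus-bodge-wire write sequence:

```c
#include <stdint.h>

/* Hypothetical helper: the real version puts the address on the
   AVR's address pins, the byte on the data pins, and pulses the
   write strobe added by the bodge wire. */
static void vram_write(uint16_t addr, uint8_t value) {
    (void)addr;
    (void)value;
}

/* Fill the visible 240 rows with an XOR pattern. It renders as
   nested squares, so a swapped or stuck address line shows up
   immediately as a mangled picture. */
void write_test_pattern(void) {
    for (uint8_t y = 0; y < 240; y++)
        for (uint16_t x = 0; x < 256; x++)
            vram_write(((uint16_t)y << 8) | (uint8_t)x,
                       (uint8_t)(x ^ y));
}
```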
And that was about as far as I could get without adding a CPU...