Sunday, February 17, 2008

Analyzing the Juno-106 DCO circuit

In the previous posting, we discussed how the core of a typical VCO works, and the main sources of drift and instability in a VCO core circuit. The Digitally Controlled Oscillator (DCO) is a 1980s technology designed to eliminate most or all of these sources of inaccuracy, while still operating as basically an analog VCO. The DCO was a transitional technology that arose as soon as inexpensive microprocessors were available that were fast enough to control the timing of an audio oscillator circuit, but not yet fast enough to compute the oscillating waveform entirely in the digital domain. (A secondary consideration was the fact that fast 16-bit digital-to-analog converters, although available, were still quite expensive at the time.) The distinguishing characteristic of a DCO, as opposed to a purely digital oscillator, is that the DCO does not have a D/A converter in the audio signal path.  (It may have a D/A converter in the control path.)

The Roland Juno-106 is a well-known synth that uses DCO circuits. Like the conventional VCO core discussed in the February 8 post, the Juno-106 DCO is a sawtooth-core oscillator, and it produces the sawtooth waveform by charging a capacitor at a constant rate. However, unlike the VCO, the discharge of the capacitor is not determined by a voltage comparator; rather, the charging time is controlled by a counter. Whenever a note is played or anything else causes the desired frequency to change (such as the action of the LFO), the CPU calculates a binary number that is fed into a countdown counter. The counter counts down on a high-frequency (MHz range) master clock, which is crystal controlled so that it is highly accurate. When the counter reaches zero, it triggers the transistor that discharges the capacitor. The circuit is shown below; the elements shown in purple are the parts of the DCO circuit that replace elements of the conventional VCO, as you can see by comparing with the figure in the VCO post.
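To make the countdown mechanism concrete, here's a toy simulation of one DCO cycle (my own sketch in Python, not anything from the actual hardware or firmware; all names and values are made up):

```python
# Toy model of the DCO core: the capacitor charges a fixed amount per
# master-clock tick, and the countdown timer reaching zero stands in for
# the discharge transistor firing. (Illustrative only.)

def dco_cycle(count, charge_per_tick):
    """Simulate one sawtooth cycle; returns the peak capacitor voltage."""
    v = 0.0
    timer = count
    while timer > 0:          # counter counts down on the master clock
        v += charge_per_tick  # cap charges at a constant rate meanwhile
        timer -= 1
    return v                  # counter hit zero: discharge, cycle ends

# With a fixed charging rate, a count 10x longer (a note 10x lower)
# produces a ramp 10x taller:
print(dco_cycle(100, 0.001), dco_cycle(1000, 0.001))
```

Note how the peak (and hence the amplitude) depends on the count when the charging rate is fixed; that is exactly the problem the control voltage discussed below has to solve.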

This setup uncouples the capacitor charging rate from the desired output frequency; the discharge of the integrating capacitor is controlled by the timer and is independent of how charged the capacitor is.  The 106's module-board CPU figures out the timer's starting value from the number of the note played (a second CPU, which scans the panel and keyboard and handles the MIDI ports, sends this information to the module board), factoring in pitch bend, LFO, and portamento.  From this it calculates the current desired frequency, inverts that to get the period, and loads that value into the timer.  Once loaded, the timer free-runs at that value until it receives a new one.
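As a rough sketch, that calculation might look something like this (hypothetical names and floating-point math; the real firmware almost certainly works in integer arithmetic and lookup tables):

```python
# Hypothetical sketch of the module-board CPU's timer calculation:
# note number (plus bend/LFO/portamento offsets) -> frequency -> period
# -> countdown value. All names and constants here are illustrative.

MASTER_CLOCK = 1_000_000  # 16' range: the 8 MHz master clock divided down

def timer_count(midi_note, bend_semis=0.0, lfo_semis=0.0):
    semis = midi_note + bend_semis + lfo_semis
    freq = 440.0 * 2.0 ** ((semis - 69) / 12)  # equal-tempered frequency
    period_ticks = MASTER_CLOCK / freq         # invert to get the period
    return min(round(period_ticks), 65535)     # clamp to the 16-bit timer

print(timer_count(69))  # A440 -> 2273 ticks at a 1 MHz timer clock
```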

However, it is still necessary to charge the cap at a rate that is proportional to the frequency; otherwise, the sawtooth will not maintain a constant amplitude as the frequency changes. (Consider the extreme case of a charging rate that does not vary at all with frequency. At low frequencies, the cap will have a long time to charge and will reach a high voltage. At high frequencies, the cap will have little time to charge, and the resulting waveform will be almost flat.) So the CPU produces a control voltage (through a D/A converter), concurrent with the digital information sent to the timer, to charge the cap.  Note that this circuit does not use a constant-current source to charge the cap; it simply passes the control voltage through a current-limiting resistor.  It can get away with this because the control voltage has no effect on the output frequency; it only affects the amplitude.  Inaccuracy in this control voltage could cause the amplitude to vary with frequency, but it won't vary the frequency itself.
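Approximating the RC charging as a linear ramp, the sawtooth peak is roughly V_cv · T / (R·C), so holding the amplitude constant means scaling the control voltage linearly with frequency. A quick sketch, with made-up component values:

```python
# Why the control voltage must track frequency. Component values are
# made up for illustration; the RC charge is approximated as linear.

R = 100e3   # current-limiting resistor (illustrative)
C = 10e-9   # integrating capacitor (illustrative)

def peak(v_cv, freq):
    """Approximate sawtooth peak: ramp slope V_cv/(R*C) times period 1/f."""
    return (v_cv / (R * C)) / freq

def cv_for_constant_peak(freq, v_peak=5.0):
    """Control voltage needed to hold the peak at v_peak volts."""
    return v_peak * R * C * freq   # CV scales linearly with frequency

for f in (110.0, 440.0):
    print(f, cv_for_constant_peak(f), peak(cv_for_constant_peak(f), f))
```

An error in this voltage shows up only as an amplitude error, never a pitch error, just as the circuit analysis says.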

(A note here: My previous post on this topic to VSE was in error in stating that the Juno-106 DCO circuit uses a constant current circuit to charge the integrating capacitor.  A closer examination of the schematic shows that it does not.  Which leads us to our next topic...)

When I examined the whole DCO control logic, I found something surprising.  When the CPU is computing the digital count value to be loaded into the timer, and fed to the D/A converter to produce the control voltage, it takes into account the things you'd expect -- the note that was played, pitch bend, LFO, and portamento.  However, it does not take into account the range selection!  Interestingly, the 8 MHz master clock, contrary to conventional wisdom, is never seen directly by the DCO timers.  Instead, it goes through a divider which divides the master clock down according to the DCO range selection: 

  • 4' range: 4 MHz
  • 8' range: 2 MHz
  • 16' range: 1 MHz
The timers do not know what range is selected; rather, changing the range changes the speed at which the timers are clocked, to produce the various frequency ranges.  The cap-charging control voltage is compensated by selecting one of three current-limiting resistors according to the range setting.

I did some math and realized why they probably did it this way: it's a design compromise between frequency resolution and the lowest achievable frequency.  In order to play all notes in tune, and not experience pitch zippering (audible stepping in an increasing or decreasing glide) when the LFO or portamento is varying the frequency, the "steps" between timer count values have to be as small as possible.  The 106 uses (somewhat surprisingly) Intel 8253s for the DCO timers (the same programmable interval timer used for system timing in many PCs).  These accept unsigned 16-bit count values.  I think the software is setting them up in the "square wave" mode, in which the count value determines the length of one whole cycle (rising edge to rising edge).  Doing the math, if the timer were clocked directly at 8 MHz, then at the maximum possible timer value (65535), the lowest frequency possible would only be 122 Hz, obviously not low enough to play the full bass end of the scale.  Using the lower clock frequencies makes the steps larger, but allows for a lower minimum frequency; at the 1 MHz rate (16' range), the minimum frequency is about 15.3 Hz, with steps between adjacent count values of about 0.2 cents near the bottom of the keyboard.
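The arithmetic above is easy to check with a few lines of Python (the ~110 Hz reference point for the 0.2-cent figure is my assumption; the step size depends on where in the range you measure it):

```python
import math

# Checking the resolution math: minimum frequency at the timer's maximum
# count, and the pitch step between adjacent count values near a given
# frequency.

def min_freq(clock_hz, max_count=65535):
    return clock_hz / max_count

def step_cents(clock_hz, freq):
    n = round(clock_hz / freq)            # count value nearest this pitch
    return 1200 * math.log2((n + 1) / n)  # pitch step to the next count

print(round(min_freq(8_000_000), 1))           # 122.1 Hz -- too high for bass
print(round(min_freq(1_000_000), 1))           # 15.3 Hz at the 16' range
print(round(step_cents(1_000_000, 110.0), 2))  # ~0.19 cents near 110 Hz
```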

Operational characteristics of this circuit: The synth has a master tune knob on the back, and although it's fairly stable, it does need tuning now and then.  What can drift to cause the need for a tuning knob?  The only thing that can is the master clock.  It's an uncompensated crystal oscillator, which can drift with temperature and also as the crystal ages.  I check mine about every 8 weeks, although it doesn't usually need adjusting.

As for differences in tonal response: An undesirable characteristic of the 106 that is often commented on is the peculiar sonic quality of the output in the unison mode.  It is totally unlike the unison mode on a VCO polysynth such as, say, the Memorymoog.  From the analysis above, it is easy to see why: two VCOs are never going to be perfectly in tune with each other; there will always be transient discrepancies.  This constant shifting of the phase angle between the two VCOs is a sound that is pleasing and interesting to the ear.  But on the 106, all six DCOs are driven by the same master clock, so if all six timers are loaded with the same value, then the output of all six will not only be perfectly in tune, but also in a fixed phase relationship.  The signal doesn't vary at all, and the ear quickly tires of it.  (Why the software designers didn't add in some small offsets for each voice when in unison mode is a mystery to me.)  An even odder characteristic is that the phase relationships do change every time a note is played; if you put the 106 in unison and play the same note repeatedly, you can hear the tonality change every time the key is re-pressed.  I'm not sure what causes this.  My only guess is that there is some other process that interrupts the module board CPU asynchronously, and it can randomly upset the timer loading process such that the timers start at slightly different times (hence phases), relative to each other, each time a note is played.

One other thing I note about the 106 DCO circuit is the difference in response to a rapidly varying control voltage. In an all-analog VCO circuit, when the control voltage changes, the circuit will "notice" immediately; the constant-current source will respond by changing the capacitor charging rate whenever the control voltage changes. If the change in the control voltage is rapid enough, the sawtooth will be "bent" as the charging rate changes in the middle of the cycle, and the cycle time for that cycle will be proportional to some average of the control voltage over the length of the cycle. In the DCO circuit, however, the timer counter is only loaded at the beginning of a cycle (to prevent glitching), and it does not "notice" any change in the control voltage during the cycle. So in the case of rapidly changing control voltage, the DCO frequency is changing in discrete steps on a per-cycle basis rather than quasi-continuously.  
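A sketch of that per-cycle stepping behavior (illustrative only; it models just the once-per-cycle loading of the timer, not the analog circuit):

```python
# The DCO samples its pitch input only at cycle boundaries, so a smooth
# pitch sweep comes out as a staircase of per-cycle frequencies.

def dco_frequencies(targets, clock_hz=1_000_000):
    """targets: desired frequency (Hz) at the start of each cycle."""
    freqs = []
    for f in targets:
        count = round(clock_hz / f)     # timer loaded once, at cycle start
        freqs.append(clock_hz / count)  # that value held for the whole cycle
    return freqs

# A very slow sweep: several consecutive cycles quantize to the same
# count value, so the output frequency moves in discrete steps.
sweep = [100.0 + i * 0.001 for i in range(10)]
print(dco_frequencies(sweep))
```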

The 106 provides three waveforms from the DCO: sawtooth, pulse, and a suboctave square wave.  The sawtooth that you hear is the direct output from the DCO core.  The pulse wave is derived from the sawtooth via a wave-shaping circuit.  An interesting detail of this is that when the pulse is de-selected, there isn't a switch or transistor disconnecting the pulse output from the circuit, as you might expect.  Rather, it "mutes" the pulse wave by over-controlling the wave shaping circuit such that it goes to a 0% duty cycle -- a DC output (which I assume gets filtered out somewhere, but I haven't yet found where).  The suboctave is produced by taking the DCO timer output and dividing it by 2, via a D flip-flop.
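The divide-by-two works because a D flip-flop with its /Q output fed back to D toggles on every clock edge. A minimal model of that behavior (my sketch, not the actual part):

```python
# D flip-flop divider: with /Q wired back to D, the output toggles on
# every rising edge of the DCO timer output, halving the frequency.

def suboctave(levels):
    """levels: sampled 0/1 timer output; returns the divided square wave."""
    q = 0
    prev = 0
    out = []
    for level in levels:
        if prev == 0 and level == 1:  # rising edge clocks the flip-flop
            q ^= 1                    # D = /Q, so Q toggles
        out.append(q)
        prev = level
    return out

# Two input cycles in, one output cycle out (one octave down):
print(suboctave([0, 1, 0, 1, 0, 1, 0, 1]))  # [0, 1, 1, 0, 0, 1, 1, 0]
```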


RevoJak said...

Wow this is great!!! Flip at world conspiracy let me borrow his juno 106 when I was 16..I'm 28 now.. I am actually looking for a new analog synth with juno or sh101 characteristics.. Currently I only have a future retro revolution that is the only true analog piece I have... You don't have anything for sale do you? Great to meet you and thank you for the post..Mark

RevoJak said...

please email me I live in

Qwave said...

Again a good and easy to understand text. Thanks !
But I found a little error: the schematic shown says "A/D converter" to the left of the current-limiting resistor.

Benjamin Budts said...

Great that somebody analyzes this and explains it.

I wonder how the chorus works on these synths, that's what makes them popular nowadays, the fat sound (Never played one though).

Wallstreet said...

Nice post! Found this searching google. I think my crystal is producing an out of tune C these days...

I had noticed the change in the DCO frequency with rapid CVs. You can tell it's a cycling when you hit the keys fast. Interesting to read about why that is happening. I thought I had yet another problem with my Juno.

Prince.Cobra said...

I hated the sound of my 106 in unison mode, and I wanted to do something about it. So I looked through the schematic and found that all the VCA/VCF chips get routed to one op-amp. Before the op-amp they use a simple mixer (a resistor of, if I recall, 3 megs for each output of the 6 voices) to combine the 6 voices before they enter the non-inverting side of the op-amp...
So I de-soldered 5 of the resistors and routed them to 5 switches out the back. So now the voices go out from the original simple mixer to the switches, then back through the resistor value, then into the op-amp.
Now when I play in unison mode I can choose how many voices I want on against the un-affected one... anyway, it works great: the other voices go to ground when switched off, and now I have a bad ass Juno 106 mono synth.