The transistor symbol

Together with Maple Martin I browsed through our group’s library and came across a couple of books by Prof. Kjell Jeppsson (from Chalmers University of Technology). One of the books, “Praktisk transistorteknik” (1965), triggered me – of course. Browsing through the pages, I realized that the transistor symbol he used in his figures looked unfamiliar to me. It was the 1965 version of the Swedish standard symbol for the junction transistor.

Where does the symbol come from? My short-story/interpretation.

So, in case you might be taking the course in analog electronics at the moment: this post aligns quite well with the topic we are currently reading. Take a quick glance at my impressionistic skills below. I have depicted the first point-contact transistor (to the left) and the “first” junction transistor (to the right). It is pretty obvious where the – today, widely used – bipolar symbol comes from. The symbol is found at the bottom left of the picture. Above that is my redrawing of the famous Bell Labs photo. The v-shaped piece of plastic, on which the phosphor-bronze traces were applied, guides the emitter and collector to and from the germanium plate, which is attached to the metal frame to which the base in turn is connected. The “housing” around the transistor is modeled by a circle around the lines.

To the right in the picture, we see a sketch of the junction transistor. A more homogeneous solution. From left to right we have the emitter, base, and collector. Here the currents go “through” the semiconductor, whereas in the point-contact transistor they go along the surface (well, arguably, but true to a first degree …). Looking at the international symbol, it does not really make sense – if one has time to care about those kinds of things. The Swedish Standard Institute (SSI) symbol, from 1965, is depicted below the junction transistor. It turns out there is a bit more logic behind that one. The base “cuts” the emitter and collector, and the current goes straight through the base. However, the symbol lost the battle.

Transistor symbols


I guess the thing was that the junction transistor was invented and patented quite soon after the delivery of the 1947 Christmas present in the shape of a point-contact transistor at Bell Labs. Due to its more integrated nature, the junction transistor was also a better choice for most users. In addition, the junction transistor had a much higher gain (200 vs. 20), was less noisy, and could take on higher power levels. (Not as high as tubes, which were also faster; in fact, the point-contact transistor initially had a higher gain–bandwidth product.) Due to this rapid development, the old symbol made it into the books. There was no point in developing a new one (unless it was exported to another continent).


This post is not supposed to be a very comprehensive comparison of the MOS and the BIP (bipolar) transistor. However, I thought it could be nice to have it outlined on a single sheet here. MOS stands for metal–oxide–semiconductor.

In the picture below we find Lilienfeld vs. Shockley. (Yes, yes, yes, one can argue about who did what, etc., but I will let them represent the two transistors.) Lilienfeld represents a more square symbol and a “simpler” expression of the current as a function of input voltage (in its desired operating region): a polynomial – the square of the input voltage, ube. Shockley presents an exponential function instead – the diode equation. I have sketched the currents as functions of the input voltage, and we see that even though the square (MOS) is stronger in the beginning, the exponential quickly comes up to pace and produces higher currents.
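A quick numerical sketch of the two current laws. All parameter values here (transconductance factor, threshold voltage, saturation current) are illustrative assumptions, not taken from the post:

```python
import math

# Illustrative device parameters (assumed, not process-specific).
K = 1e-3        # MOS transconductance factor, A/V^2
V_TH = 0.4      # MOS threshold voltage, V
I_S = 1e-15     # bipolar saturation current, A
V_T = 0.026     # thermal voltage kT/q at room temperature, V

def i_mos(v_in):
    """Square-law MOS drain current (saturation region)."""
    v_eff = v_in - V_TH
    return K * v_eff**2 if v_eff > 0 else 0.0

def i_bjt(v_in):
    """Exponential bipolar collector current (diode equation)."""
    return I_S * math.exp(v_in / V_T)

for v in (0.5, 0.6, 0.7, 0.8):
    print(f"u_in = {v:.1f} V:  MOS {i_mos(v):.2e} A,  BJT {i_bjt(v):.2e} A")
```

With these assumed numbers the square law wins at low input voltages, but the exponential overtakes it somewhere between 0.6 and 0.7 V.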

The MOS layout is more compact than the bipolar one. The MOS offers an infinite input resistance (well, almost). The bipolar does not; in fact, it must have an input current to operate as desired.

Considering the small-signal schematics, the parameters gm (transconductance), gds/go (output conductance) and gp (input conductance) can all be derived as dependent on the current through the transistor. The higher the current, the higher everything, sort of. Arguably, this implies that the gain is more or less independent of current.
At the bottom of the figure, we find the intrinsic gain of the transistors.

For the MOS it is (twice) the Early voltage over the effective input voltage, i.e., the gate–source voltage minus the threshold voltage. For the bipolar it is the Early voltage over the thermal voltage (~26 mV). These two gain expressions actually tell us that it is quite likely that the gain is higher for the bipolar than for the MOS! (This can also be seen from the MOS transistor operating in the subthreshold region.)

Why larger? It is hard to push down the MOS effective voltage to the required 52 mV to match the bipolar relying on the thermal voltage.
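Plugging in some assumed numbers makes the comparison concrete. Here I use the square-law result gm = 2I/V_eff, which is where the factor behind the 52-mV figure comes from; the Early voltage and the effective voltages are illustrative values:

```python
V_A = 10.0      # Early voltage, V (assumed equal for both devices)
V_T = 0.026     # thermal voltage kT/q at room temperature, V

def gain_mos(v_eff):
    """MOS intrinsic gain, gm/gds = 2*V_A/V_eff (square-law model)."""
    return 2 * V_A / v_eff

def gain_bjt():
    """Bipolar intrinsic gain, gm/go = V_A/V_T."""
    return V_A / V_T

print(f"Bipolar:               {gain_bjt():.0f}")
print(f"MOS at V_eff = 200 mV: {gain_mos(0.2):.0f}")
print(f"MOS at V_eff = 52 mV:  {gain_mos(0.052):.0f}")
```

At V_eff = 52 mV, i.e., twice the thermal voltage, the two expressions coincide; at a more realistic 200 mV the bipolar wins by a factor of four.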


Developing labs

We are facing quite a lot of challenges within the field of electronics at our university. In short: there are fewer and fewer students taking electronics courses and we should adapt to that situation.

There would be at least two ways to address the problem: either we scale down, or we make the courses more interesting such that, in the end, more students will choose to study electronics. The “problem,” however, might be that we are a bit late if we offer this change at the university level. Most likely we have to be much more active and visible to children/pupils already in their early teens.

Anyways, while changing courses, why not study one of the perhaps most important elements of a course: the laboratory. This is in some sense the only occasion when the students can practice and try the theory in a context. That sounds easy – doesn’t it? Take a course in basic electrical circuits: it might contain course elements such as DC (Ohm’s law, KCL, KVL) and AC currents (jw, power), as well as something around frequency analysis (amplitude characteristics). Three main parts of the course. Easy as pie: introduce three laboratories – one on each subject. Happy days. End of story. … Or?

A while ago I visited a seminar hosted by Anna-Karin Carstensen at the Norrköping Campus. They had been intensely studying how the learning process takes place in a laboratory series. They monitored students and filmed them during the laboratory work (asking for permission, of course). Then they analyzed the results. It was then quite visible to them where the flaws in the laboratories were, regardless of whether the theory had been taught in lectures or not. It did not matter if the lecturer thought that all the material was there, at the students’ hands. The processes required to enable the students to form the links connecting chunks of knowledge/wisdom, to move on in the laboratory series and grasp the knowledge, were simply not there.

It was part of Carstensen’s Ph.D. studies to monitor these laboratories and develop a method for creating a new laboratory in which students could more easily link between pieces of knowledge to understand the “whole” picture.

Yes, I know that the nomenclature fails me; I am not trained in this field of research. I am just trying to give my review in a straightforward way.

Consider the picture below, which at a first glance might look a bit simple. It depicts the learning processes, the links, in a laboratory whose aim is to, sort of, “understand the Laplace transform”. How do you make the connection between the time domain, the frequency domain, poles and zeros, and the Laplace transform? The task of the lab is to curve-fit and find important parameters of the step response of an RLC circuit (resistor, inductor, capacitor). By doing this, a better understanding of how the locations of poles and zeros, i.e., the coefficients in the Laplace polynomial, affect the step response should be developed.

This picture is from Carstensen’s dissertation and I have permission to publish it here.

Now, the point is to sit down and actually look at the laboratory and its manual. What pieces of the puzzle do we have at hand? How will the students see these pieces? You want the students to bridge all links (verbs btw, actions), such that they move around freely and “understand”, “conceive”, in the graph above.

Let us start in the top left corner of the circle (you know what I mean…). Students are given a real circuit, including a schematic (at this stage of their training, they conceive the schematic as a “real circuit”). From that you derive the differential equation. From there on, by replacing operators with s or 1/s, you get to Mr. Laplace. Or you could get to Laplace from the real circuit by doing some KCL & KVL exercises and replacing C with 1/sC, etc. Through tables you would get from Laplace to the time-domain representation, you would plot it, and you would compare it with the measured graph. The graph is measured on our real circuit. And we are back at stage one. That sounds pretty straightforward, right?

Well, yes, perhaps. But actually, you want the students to understand, not just walk around the circle applying a standard set of rules. You want them to make the connections and not a) get stuck in smaller loops, and not b) just run the outer loop. They must be able to get from any point to any other. Those links, or enablers for them, must be in the lab too; otherwise it is just a fill-in-the-blank-boxes exercise.
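As a sketch of the lab task itself, here is a minimal, dependency-free version of the curve fit (all component values are hypothetical): we generate the step response of an underdamped series RLC circuit and then recover the damped frequency and the damping coefficient, i.e., the pole locations, from the “measured” curve.

```python
import math

# Hypothetical component values for an underdamped series RLC circuit.
R, L, C = 10.0, 1e-3, 1e-6           # ohms, henries, farads (assumed)
alpha = R / (2 * L)                  # damping coefficient, 1/s
w0 = 1 / math.sqrt(L * C)            # undamped natural frequency, rad/s
wd = math.sqrt(w0**2 - alpha**2)     # damped frequency, rad/s

def step_response(t):
    """Capacitor voltage for a unit step into the series RLC circuit."""
    return 1 - math.exp(-alpha * t) * (math.cos(wd * t)
                                       + (alpha / wd) * math.sin(wd * t))

# "Measure" the response on a time grid, as the students would on a scope.
ts = [i * 1e-7 for i in range(20000)]
vs = [step_response(t) for t in ts]

# Recover the damped frequency from the spacing of successive maxima,
# and the damping coefficient from the log decrement of the overshoots.
peaks = [(t, v) for t, v, vp, vn in zip(ts[1:], vs[1:], vs[:-1], vs[2:])
         if v > vp and v > vn]
t1, v1 = peaks[0]
t2, v2 = peaks[1]
wd_est = 2 * math.pi / (t2 - t1)
alpha_est = math.log((v1 - 1) / (v2 - 1)) / (t2 - t1)

print(f"wd:    true {wd:.0f}, estimated {wd_est:.0f} rad/s")
print(f"alpha: true {alpha:.0f}, estimated {alpha_est:.0f} 1/s")
```

The point, in the spirit of the discussion above, is that each step (time grid, peak picking, log decrement) corresponds to one of the links the students should form, not just a recipe to follow.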

Much more can be said; I just wanted to inspire some more reading before you plan your next lab, and I will try to adapt this way of thinking for next year’s TSTE92 Electrical Circuits.

Another bound on power consumption in DACs

I wanted to go back to the previous post where we investigated the trade-offs between speed, resolution and power consumption in a digital-to-analog converter (DAC). That gave us a bound in that triangle.

What if we use another starting point in our argument rather than noise?

Same type of DAC

Consider the DAC in the figure. Same type of DAC as last time. We have a current through a resistor forming the voltage. What is the minimum possible quantum of charge through the resistor? Leading question … Assume the least significant bit is determined by one single electron during the sampling period.
The average current for a single electron is given by

I = q_0 / \Delta T = q_0 \cdot f_s

where f_s is the sample frequency and q_0 = 0.1602 aC is the elementary charge of the electron. (Let us ignore quantum effects and those things. Let us be traditional and assume we know where the electron is, before we open the box with the cat, …)
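In numbers, using the 600-MHz example rate that appears further down:

```python
q0 = 1.602e-19   # elementary charge, C
fs = 600e6       # sample frequency, Hz (example value)

I = q0 * fs      # average current if one electron flows per sample period
print(f"I = {I:.3e} A")   # about 96 pA
```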

Small value?

So, is this a small or a large value?
Well, assume a resistive load of R_L. The (average) voltage for that particular charge, corresponding to one least significant bit, would be

\Delta V = R_L \cdot I = R_L \cdot q_0 \cdot f_s

Assume 100 Ohms in the load, and a sample frequency of 600 MHz. We get

\Delta V = 100 \cdot 0.1602 \cdot 10^{-18} \cdot 600 \cdot 10^6 \approx 10^{-8}

which is 10 nV. This would correspond to something like:

  • In a 16-bit converter, this would mean that the peak voltage is some 655 uV.
  • In a 20-bit converter, sampled at 1.2 GHz, the peak voltage is some 21 mV.
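The two bullet points can be checked directly. Note that the ~655 uV figure comes from using the rounded 10-nV LSB; the exact elementary charge gives a slightly smaller number:

```python
q0 = 1.602e-19   # elementary charge, C
R_L = 100.0      # load resistance, ohms

def lsb_voltage(fs):
    """Smallest possible LSB voltage if one electron defines the LSB."""
    return R_L * q0 * fs

def peak_voltage(fs, n_bits):
    """Corresponding full-scale (peak) voltage of an N-bit converter."""
    return lsb_voltage(fs) * 2**n_bits

print(f"LSB  at 600 MHz:          {lsb_voltage(600e6)*1e9:.1f} nV")
print(f"Peak at 600 MHz, 16 bits: {peak_voltage(600e6, 16)*1e6:.0f} uV")
print(f"Peak at 1.2 GHz, 20 bits: {peak_voltage(1.2e9, 20)*1e3:.1f} mV")
```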

The voltages simply cannot be less than that for the given sample frequencies and resolutions. Otherwise we would have to split the electron (buying Swedes a cake).

I think it is actually rather interesting: the faster you sample, the fewer electrons there are for each least significant bit (LSB).

Full-swing signal

With a full-scale sinusoid signal in place, we can find the average power as

P = \frac{V_{ref}^2}{8 R_L} = 2^{2 N - 3} \cdot \frac{q_0^2}{T^2} \cdot R_L = R_L \cdot 2^{2 N - 3} \cdot q_0^2 \cdot f_s^2

which is then the absolute minimum possible power that must be consumed to obtain a certain resolution.
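As a sanity check of the bound, here it is evaluated for the 16-bit, 600-MHz, 100-Ohm example used above:

```python
q0 = 1.602e-19   # elementary charge, C
R_L = 100.0      # load resistance, ohms

def min_power(n_bits, fs):
    """Minimum power, P = R_L * 2^(2N-3) * q0^2 * fs^2, full-scale sinusoid."""
    return R_L * 2**(2 * n_bits - 3) * q0**2 * fs**2

print(f"P_min(16 bits, 600 MHz) = {min_power(16, 600e6):.2e} W")  # ~0.5 nW
```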

Are we having fun yet?

Just for the fun of it, let us rewrite the formula a bit

P = C_L R_L \cdot 2^{2 N - 3} \cdot \frac{q_0}{kT} \cdot \frac{kT}{C_L} \cdot q_0 \cdot f_s^2 = 2^{2 N - 3} \cdot \frac{q_0}{kT} \cdot \frac{kT}{C_L} \cdot q_0 \cdot f_s / \pi

where we have assumed that we also have to guarantee the bandwidth, not just the sample frequency and voltage levels.
In the equation, we can identify the inverse of the 26-mV thermal voltage (q_0/kT), a noise power kT/C_L, and some constants. Possibly, it could be related to the previous post. Philosophically (?) one could also ponder what the thermal noise looks like when we push single electrons back and forth.


How fast do we need to sample over a 100-Ohm load to get a 1-V drop with a single electron? (Once again, from a mathematical point of view, and possibly not the correct physical description of the scenario).

V = R_L \cdot I = R_L \cdot q_0 \cdot f_s = 1

f_s = 1 / (R_L \cdot q_0) \approx 62 \cdot 10^{15}

Ok, so 62 PHz is rather fast… if we correlate with light, it ends up around the extreme-UV domain, with a wavelength some 100 times shorter than visible light. And now we are kind of entering photoelectric effects, sort of …
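Checking the number, and the corresponding wavelength:

```python
q0 = 1.602e-19   # elementary charge, C
R_L = 100.0      # load resistance, ohms
c = 3e8          # speed of light, m/s

fs = 1 / (R_L * q0)       # sample rate for a 1-V single-electron drop
wavelength = c / fs       # the corresponding "optical" wavelength

print(f"fs = {fs:.2e} Hz")                      # about 62 PHz
print(f"wavelength = {wavelength*1e9:.1f} nm")  # a few nm
```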