Wednesday, December 2, 2009

Watt’s Up With That? Part 1

I’ve been thinkin’ about watts lately… the wattage ratings that harp amp makers claim for their amps. There is no standard method in use for measuring amp power. Sometimes it seems the numbers have more to do with marketing than with real engineering.

There is a rule of thumb used by some amp makers that goes like this: a certain kind of power tube has the potential to produce X watts of power. So, if they use 6L6 tubes they claim 20 to 25 watts per tube; if they use 6V6 tubes they claim 9 to 12 watts per tube.

But the amp circuit design has a LOT to do with the amp getting to that potential. For example, a cathode-biased amp has a tough time getting beyond 25 clean watts in a 2x6L6 amp, while a fixed-bias amp may get all the way to 50 watts. (For now, don’t worry about the technical mumbo jumbo. Just agree with me that amp watt ratings can be fuzzy.)

You might notice I wrote the phrase “clean watts” in the previous paragraph. Why would a Chicago Style blues harp player want a clean amp? Well, you don’t, but the amount of power an amp can generate before clipping (the point where it runs out of clean headroom and starts to distort) is an important measure of its performance. Hi-fi buffs will recognize this spec from their favorite stereo: 100 watts RMS per channel @ 8 ohms with 0.1% total harmonic distortion.
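For the mathematically inclined, here’s a quick back-of-the-envelope sketch in Python of what a clean RMS rating actually implies. The load impedances and power figures are just example values of mine, not from any spec sheet:

```python
import math

def rms_volts_for_power(watts, load_ohms):
    """RMS voltage needed to deliver a given power into a resistive load (P = V^2 / R)."""
    return math.sqrt(watts * load_ohms)

# The hi-fi example above: 100 W RMS into an 8-ohm load
print(round(rms_volts_for_power(100, 8), 1))   # ~28.3 V RMS across the speaker terminals

# A hypothetical 50-watt fixed-bias 2x6L6 amp into 8 ohms
print(round(rms_volts_for_power(50, 8), 1))    # ~20.0 V RMS
```

In other words, a stated RMS power into a stated load pins down exactly how much voltage swing the amp has to deliver, which is what makes the spec testable.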

That last part about percent of distortion is the missing piece in harp amp power ratings. Tube amps are capable of producing power beyond their clean power rating, and the distortion in tube amps can be a lovely sound, while in solid state and digital equipment it can be very harsh sounding.

So then… To what point do we drive a tube amp when testing for power output? Should we dime the amp all the way to get ultimate peak power? There are a couple of problems with that: Tone sometimes degrades considerably at that level, and nobody ever plays that loud anyway because they get feedback before getting there. (My 5-watt 1970 Fender Champ is excused from both of these rules.)

What power rating will make sense to amp shoppers? How can we make the system more honest and meaningful? My proposition is this: All harp amp makers should publish a clean RMS rating as well as their best estimation of real usable power, NOT maximum theoretical power.

The clean rating should be derived by driving the amp into an appropriate dummy speaker load and measuring the output voltage on a scope. Feed in a sine-wave test signal, crank the amp until the wave just begins to visibly deform, back it off to clean, and use Ohm’s Law to calculate watts at that exact point.
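Here’s a minimal sketch of the arithmetic behind that bench test. The scope reading and the dummy-load impedance are made-up example numbers, and I’m assuming a clean sine wave measured right at the onset of visible clipping:

```python
import math

def clean_watts(v_peak_to_peak, load_ohms):
    """Convert a scope reading (peak-to-peak volts of a sine wave) into RMS watts.

    For a sine wave, V_rms = V_pp / (2 * sqrt(2)); then P = V_rms^2 / R (Ohm's Law).
    """
    v_rms = v_peak_to_peak / (2 * math.sqrt(2))
    return v_rms ** 2 / load_ohms

# Example: 40 V peak-to-peak across an 8-ohm dummy load, just before the wave deforms
print(round(clean_watts(40.0, 8.0), 1))  # ~25.0 W RMS of clean power
```

Any tech with a signal generator, a dummy load, and a scope can repeat this, which is the whole point: it makes the published number something you can verify on the bench.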

All amp makers should publish this spec, and all consumers should demand it. If you take your amp to a tech and it does not produce the level of clean power specified by the manufacturer, you should return it for repair or refund.

I'll be writing more about this in the future.
