The light bulb ban is evil because it treats us as if we were, in the aggregate, a “system” to be tinkered with. Change an input here, and the output will change there.
Humanity’s aggregate actions and decisions, our interactions with each other and with the physical world, the physical world’s effects on us: the sum total of these things is seen, in the progressive-left philosophy, as a great “system,” which we are competent to tinker with and “improve.”
This is one of the main sources of immorality and vice in the modern world: treating people as if, added together, we amount to a giant propulsion system, which can be tuned up and have its performance changed. The evil here lies not in studying or analyzing human activity in the aggregate, but in the collateral, and inevitable, temptations that come with it, such as the temptation to reorder that activity by force.
We do study the aggregate patterns of humans, but doing so has not always been considered a morally good activity. There’s a good reason for that. When we look at our fellow men as elements of a system, rather than as a host of individual moral actors, to whom we owe certain forms of respect (and who in turn owe that to us), we lose our moral focus. It’s all too easy to make that shift, and to translate the amoral approach of science and engineering into an amoral approach to social or political action.
But there can be no sustainably “amoral” approach to human interaction. Our treatment of each other is inherently a moral question; there is no such thing as amoral human interaction. We are either acknowledging our obligation to behave morally toward each other, or we are behaving immorally.
The light bulb ban doesn’t acknowledge our obligation to behave morally toward each other. It operates on a premise that there is a higher ethic, one that allows us to bypass moral obligations and force our fellow men to conform to our concept of a better-performing “system.”
This doesn’t mean government can’t do anything. Without defending every aspect of government intervention in electric power over the last 120 years, we can nevertheless distinguish between what’s being done today and what governments did a century ago. For example, it was a morally respectful purpose to want to extend grid power to areas where the market didn’t yet make it a profitable venture. That’s because the project involved making an offer to potential customers, not forcing them to participate.
That approach is a morally respectful one, according the potential customer the choice he has a right to. (This isn’t to say that governments would necessarily handle the operation well. And there may be other, perfectly valid objections to it, from the taxpayer’s point of view. What I’m suggesting here is just a judgment about the purpose.) If governments had stepped in to outlaw tallow candles, gas lanterns, and hearth fires, in order to channel farmers and woodsmen into getting their homes wired, that would have been evil.
The core issue is always government force. Advertising specialists and marketers make their living studying us in the aggregate, and advising retail companies on how best to change our patterns so that we buy the companies’ products. But no one can force us to buy anything by that method. Entrepreneurs have lost as many fortunes as they have gained trying to induce us to buy their products. It would be foolish of them to forgo market research, but market research is no guarantee of anything.
The same is just as true of whatever “scientific” conclusion we think we have arrived at in order to justify regulation. This is a secondary, if important, point against prophylactic government: we’re almost always wrong, as it turns out. What we thought we knew was incomplete, or was already becoming outdated by the time we tried to regulate on the basis of it. Regulation by the state rarely produces the outcome its proponents hoped for.
There are a few cases, important but narrow ones, in which regulation with force behind it has genuinely made life better for everyone. Public sanitation – waste management and clean water – is a big one. Another is programs of inoculation against disease. In each case, however, humans are seen not as elements of a larger system that requires tweaking, but as the purpose and end product of the enterprise. We inoculate people to keep people alive and disease-free. That’s the same reason we treat our water, and manage waste so that it doesn’t build up and harbor toxic growth.
Equally important is the fact that the transmission of certain kinds of diseases and toxins is something people have little or no discretion over. It isn’t a matter of choice whether you get infected by certain rampant diseases. If they can be transmitted by breathing, coughing, or drinking water, you’re going to be at risk. The consequences, meanwhile, are neither theoretical, nor abstract, nor deduced through complicated steps; they are straightforward, first-order, and empirically demonstrated over and over again.
It’s one thing for local public authorities to take action on these matters. It’s another for governments to latch onto abstract theories about the performance of an aggregate human “system,” and try to force involuntary behavior on their fellow men just to serve that abstract idea. The latter, unfortunately, describes most regulatory efforts by American state and federal governments over the last 100 years.
The incoherent idea of “redistributing income” is an abstraction about an aggregate human “system,” one that modern taxpayers have been made to serve. So, for that matter, is the idea of government harvesting metadata from people’s communications. Analyzing metadata can be done because a complex system of human interaction exhibits patterns. That doesn’t mean it should be done; certainly not by the citizens’ own government, which has a gun to the heads of both businesses and customers.
Unquestionably, metadata analysis sees our interactions not in moral terms but in utilitarian, mechanical terms. In principle, this is exactly the same way the light bulb ban sees our interactions and decisions. If you’re worried about the NSA surveillance program, but not about the light bulb ban, you haven’t thought about any of it very hard.
Complexity in our human interactions hasn’t divorced them from the fundamentally moral basis of the Bill of Rights. But with our plethora of alphabet-soup agencies and their open-ended charters to study and regulate us, we have created a gigantic space of moral twilight, in which our rights are encroached on because we have authorized government to see us as a “system” to be exploited and tweaked.
That’s not what we are. That’s not what our fellow men are. If you think you’re seeing a “system” around you whose performance you need to have control over, because its costs will be borne by you, you’re invariably looking at an artificial system created by government mandate. The answer is never to start controlling your fellow men, or even to change how they’re being controlled. The answer, if you don’t want to be made to bear costs you didn’t sign up for, is to get rid of the mandate.