Begging the question

In my last post I described the syllogism "Photogenic people look good in photographs; Michelle Pfeiffer is photogenic; therefore, Michelle Pfeiffer looks good in photographs" as "begging the question". A few people commented on that, so I thought I'd address this point of English usage.

In modern usage, "begging the question" has come to mean nothing more than "the situation suggests that an obvious question to raise at this time is blah blah blah." For example, "The global financial meltdown begs the question: was there insufficient federal oversight of the American mortgage industry?" Though this usage is certainly common in civic discourse and the media, it is entirely a modern departure from the historic usage of the phrase. I try to eschew this modern usage when I say "begs the question".

"Begs the question" is also sometimes used to mean "this argument raises additional questions which require additional investigation before we can accept the argument". Though this is considerably closer to the traditional definition of the phrase, this is also not exactly what I mean.

When I say "begs the question", I mean it in the traditional sense of "this argument is fallacious because it takes as a premise an assumption which is at least as strong as the thing being proven, and is therefore an unwarranted assumption."

Let me give you another example of question begging, in the traditional sense, which might be more clear.

Suppose I asked "why are diamonds very hard but butter is very soft?" and you answered "diamond and butter are both made out of atoms; the atoms of diamonds are hard and the atoms of butter are soft." You would have begged the question; your answer to my question "why are some things hard and some things soft" is "because some things are made out of stuff that is hard and some things are made out of stuff that is soft" -- that is, you've avoided answering the question by providing an "explanation" that itself cannot be understood without answering the original question -- namely, why is some stuff hard and some stuff soft? This pseudo-explanation has no predictive power; it doesn't tell us anything new, it just circles back on itself. The explanatory assumption -- that some atoms are hard and some atoms are soft -- is stronger than the thing we are trying to investigate -- the hardness and softness of two substances.

A non-question-begging answer would be "diamond and butter are both made of atoms; the atoms of a diamond are all identical and arranged in a stable, rigid lattice where every point in the lattice is reinforced by a strong bond to four other points. The atoms of butter are a disorganized collection of many different atoms grouped into different kinds of relatively complex molecules; though the molecules themselves are quite strong, each molecule of butter holds weakly to each other molecule. It takes only a small force to disrupt the loose arrangement of butter molecules but a very large force to disrupt the strong arrangement of diamond atoms. We perceive this difference in required force as 'hardness' on the human scale, but in fact it is a property that arises from the sub-microscopic-scale properties of each substance."

Now, this explanation does *raise* more questions. It raises questions like "why are some lattices strong and some weak?" and "why are some objects composed of many different kinds of atoms organized into molecules, and some composed of just one atom?" Question-begging is not the act of raising more questions. Every good explanation raises more questions. What makes this explanation a good one is that it is testable and has predictive power; we can investigate the hardness or softness of other substances, and make predictions about what sorts of atomic structures they will have -- or, vice versa, we can look at an atomic structure and try to figure out from it how hard the substance will be. We can invent other techniques for determining atomic structure, like x-ray diffraction crystallography or spectroscopic analysis, and use those to cross-check our "atomic theory of hardness".

But the "because she's photogenic" pseudo-explanation is clearly question-begging. Why does she look so good? Because she's photogenic. Why is she photogenic? Because she looks so good. We have learned nothing about photogenicity (or the lovely Ms. Pfeiffer).

Similarly, if you ask "why is this code thread-safe?" and the answer is "because it can be correctly called on multiple threads", we've begged the question. Why is it thread-safe? Because it's correct. Why is it correct? Because it's thread-safe. Again, we have learned nothing about the nature of thread safety.
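To make the contrast concrete, here is a rough sketch in C# (my own illustration; the Counter class is hypothetical). A non-question-begging claim of thread safety names the hazard and the mechanism: the increment below is a read-modify-write, two threads can interleave it and lose an update, and an atomic increment removes that specific hazard.

    using System.Threading;

    // A hypothetical counter, for illustration. "It's thread-safe because it
    // can be called from multiple threads" explains nothing; naming the
    // hazard (a lost update) and the mechanism (an atomic increment) does.
    public class Counter
    {
        private int _count;

        // NOT thread-safe: _count++ is a read-modify-write sequence. Two
        // threads can both read the same old value, and one increment is lost.
        public void IncrementUnsafely() => _count++;

        // Safe against lost updates: the read-modify-write is performed as a
        // single atomic operation.
        public void IncrementSafely() => Interlocked.Increment(ref _count);

        public int Count => Volatile.Read(ref _count);
    }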

  • @Random832,

    You are absolutely right: "thread-safe" does have a meaning, and not being able to formulate a definition does not mean there is no definition. But your own example of a collection class shows that the definition of thread-safety *varies* based on what we're dealing with. You have described the case of a collection; it may be different, for example, in the case of a workflow (off the top of my head, I can think of the additional danger of using up the .NET thread pool; also, I forget the URL, but there is a wonderful article somewhere in the depths of MSDN describing how to create workflow-based ASP.NET applications, and it mentions the danger of too many threads being spawned by each HTTP request).

    Threads may threaten to leave an object in an inconsistent state; they may cause unexpected and/or unpredictable changes of state; they may disrupt the timing of some time-critical operation; they consume CPU time and RAM (at least by creating handles)... but I'm sure you don't need me, or anyone else, to recount those things: they are, after all, common knowledge.

    So, once again, there has to be a definition of "thread-danger" to help us choose and formulate the appropriate definition of "thread-safety". And I say "appropriate" because, as you say, there certainly *always* is a definition for it, and we may intuitively "feel" it in its entirety, but we need to narrow it down to the case at hand. A sketch below shows one way the appropriate definition shifts with what we're dealing with.
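    For instance (a rough sketch of my own, using ConcurrentDictionary as a stand-in for any "thread-safe" collection): each individual operation below is safe on its own, yet the compound check-then-add is not, so "thread-safe" for the collection and "thread-safe" for the code that uses it are two different definitions.

        using System.Collections.Concurrent;

        public static class CacheExample
        {
            private static readonly ConcurrentDictionary<string, int> _cache =
                new ConcurrentDictionary<string, int>();

            // Racy despite the "thread-safe" collection: another thread can add
            // the key between ContainsKey and the assignment, so one computed
            // value silently overwrites the other.
            public static int GetRacy(string key)
            {
                if (!_cache.ContainsKey(key))
                    _cache[key] = Compute(key);
                return _cache[key];
            }

            // The collection makes the compound check-then-add atomic. (The
            // factory may still run more than once under contention, but only
            // one result is kept.)
            public static int GetSafely(string key) => _cache.GetOrAdd(key, Compute);

            private static int Compute(string key) => key.Length; // stand-in work
        }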

  • @Random832,

    Then again: upon reflection, there *is* a set of requirements that any code which may be executed in a multi-threaded environment has to conform to. For example, leaving no object in an inconsistent state: off the top of my head, I cannot think of a case where you just wouldn't care about the consistency of an object's state, no matter how temporary and insignificant the object is. And this, of course, comes on top of the obvious "cause no crash" and "leave no rubbish, like 'loose' handles and the corresponding system objects, behind".

    So it looks like there is, if not a complete definition, then at least a minimal set of requirements for thread-safety. But the rest still depends on the specific situation. A sketch below shows how the "consistent state" requirement can be violated.
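    Here is a rough illustration of my own (the Range class is hypothetical): without the lock, a reader could catch the object between the two writes and observe a state that violates its invariant.

        using System;

        public class Range
        {
            private readonly object _gate = new object();
            private int _min, _max; // invariant: _min <= _max

            public void Set(int min, int max)
            {
                if (min > max) throw new ArgumentException("min must not exceed max");
                // Without the lock, a reader could see the new _min paired with
                // the old _max -- a broken invariant, even though each write on
                // its own is perfectly fine.
                lock (_gate)
                {
                    _min = min;
                    _max = max;
                }
            }

            public (int Min, int Max) Get()
            {
                lock (_gate)
                {
                    return (_min, _max);
                }
            }
        }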

    I have my own notion of thread safety -- it is not precise and I don't know how well it matches up with other definitions.

    To me, thread safety boils down to a question of what happens to shared resources (shared meaning visible across multiple threads), whether the resource is an object, a chunk of memory, a handle to a file on disk, etc. It is also further restricted (in the simple case) to modifications made to these resources, which means that read-only access to memory is always thread-safe, since the state of the object does not change. It's not this simple when the shared resource is a chunk of code, such as an interrupt handler, where simply executing the code (for example, reading a value from an I/O port) causes side effects.

    An additional aspect of thread safety is ensuring that when changes to an object get published, or made visible to other threads, the changes are all visible simultaneously. For example, when changes are made to 3 different fields of an object, the code needs to ensure that ALL the changes are visible at the same time, so that all 3 fields are seen, not some partial mixture of fields, and the object is always in a consistent state. (The sketch at the end of this comment shows one way to arrange this.)

    Some effects that need to be taken into account are compiler-related, such as ensuring that variables that may change asynchronously to one thread do not get hoisted out of loops such that the change to the variable by another thread would not get noticed. There are keywords like "volatile" to help with some of this.

    There are many other thread-related issues too, such as deadlocks, livelocks, priority inversion, etc., that are indirectly related to thread safety.

    So to me, calling something thread-safe has definite meaning, and I have a mental checklist I run through when examining code to determine if it is "thread safe". It's not as simple as putting mutexes around all access to the object.
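    A rough sketch of the publication and hoisting points above (my own illustration; Settings and SettingsHolder are made-up names): an immutable snapshot is swapped in through a single volatile field, so readers see all three fields at once, and the volatile read cannot be hoisted out of the polling loop.

        using System.Threading;

        // Immutable snapshot: all three fields are assigned once, in the
        // constructor, so a reader sees either the whole new snapshot or the
        // whole old one -- never a partial mixture of fields.
        public sealed class Settings
        {
            public readonly int A, B, C;
            public Settings(int a, int b, int c) { A = a; B = b; C = c; }
        }

        public class SettingsHolder
        {
            // Publishing through a single volatile reference makes all three
            // fields visible to other threads at the same time.
            private volatile Settings _current = new Settings(0, 0, 0);
            private volatile bool _stop;

            public void Publish(int a, int b, int c) => _current = new Settings(a, b, c);
            public Settings Read() => _current;

            public void RequestStop() => _stop = true;

            public void PollLoop()
            {
                // Without 'volatile', the compiler would be free to hoist the
                // read of _stop out of the loop and spin forever after another
                // thread sets it.
                while (!_stop)
                {
                    Settings s = Read();
                    // ... use s.A, s.B, s.C ...
                }
            }
        }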

  • Much as Godwin's Law predicts the eventual invocation of Hitler in any sufficiently long internet discussion, I'd like to propose an axiom in the same sociological vein.  There is a correlation between A, the number of commenting readers of a blog that are of an analytical (not to say hairsplitting) bent, and B, the likelihood that any blog post that mentions "beg the question" will get more responses to this controversial point of usage than to the original intended topic. (A for analytical, B for beg.) I'd say that on average, you need about 3 such readers for a 50% likelihood and perhaps 6 for 95%.  

    Relatedly, the number of grammarian posts is probably big omega of n^2, where n is the number of such readers: each such reader tends to respond to every other at least once.  This far outstrips the usual number, which is, what, n log n, maybe?  That is, the rate of response per reader would be log n.  Or perhaps it would be "blog n".  
