As you might expect, some programmers think the features documented in
the top ten ways to be screwed by C are
desirable.
ddyer: Sorry, no sale here. If there are no side-effect
interactions due to the order of evaluation, then it doesn't matter which
order the compiler chooses.
If there are potentially some interactions, then a sufficiently smart
compiler of the opposite persuasion, or a sufficiently dumb compiler
of the same persuasion, would produce different results.
Doing evaluation one way or the other is absolutely fundamental - it defines
what the correct result should be. You're saying the compiler ought to be
allowed to decide? Bah!
Consider the traditional example:
printf("%d, %d, %d\n", i++, i++, i * i);
All four arguments are evaluated at once;
the mere fact that we expect a given processor will have to evaluate
them in a specific order should not bias us into assuming that order is "correct".
I would be quite happy with a compiler which responded with (for i starting
at 0) "0, 0, 0\n", and set i to 1.
When I wrote a C-like language, the result would have been "1, 0, 0", and
i would have ended up 2. Or it would have been "0, 1, 4", and i would have
been 2. Depending on flags.
ddyer:
It's a question of who is writing the program: you or the compiler. Compilers
are there to jump through hoops.
Compilers are free to evaluate everything at once if they can determine
that the result is semantically correct. Languages that are designed
for parallelism give some assistance in this matter.
In fact, the reason it's undefined in C is nothing like as advanced a
concept as supporting parallelism; it's actually related to the most
convenient way to build a stack frame on PDP-11s (arguments were pushed
right to left, so the first argument of a varargs call sat at a known offset).
ddyer:
I hope you don't write software that controls the planes I fly
in. :) Seriously, your conception of efficiency is misguided. The
scarce resources are good programmers and correct code, not computer
cycles.
ddyer:
Because, if the compiler gets to choose, then next week, or next year
your program will produce different results without your consent.
It's bad enough when it happens for any reason; it shouldn't be a feature
of the language.
Enough. I don't expect to convert you.
ddyer: I agree with that, but I prefer to deal with the problem
by defining the result. I could live with your point of view if the
compiler enforced the restriction against using undefined results; or
in this case, enforced the restriction that arguments to functions be
free of side effects.
Take an ordinary function call like f(a(foo), b(bar)).
This kind of thing is a yawning chasm waiting for someone to fall
into.
I agree that it is a bad idea for a and b to communicate in
non-obvious ways, and for programmers to deliberately take advantage
of obscure dependencies; that isn't the issue. The issue is whether
the language punishes poor practice by randomizing the results.
About order of evaluation
(bug #7 in ways to be screwed by "C")
Respondent S: I'd argue that many of these aren't truly
bugs; they're pitfalls, but there are good reasons for them.
(Especially the undefined order of argument evaluation. I can see no
good reason to define it either way; doing so will break a smart
enough compiler, which could otherwise evaluate several complicated
things at once, on a parallel machine.)
Respondent S:
I am saying that it would be best for such things to be undefined, just like
i=++i; such a statement has no sane meaning. I think it is correct for the
language to not specify the behavior of things that don't make sense; after
all, you can explicitly force ordering of anything you need a specific
ordering of.
printf("%d, %d, %d\n", i++, i++, i * i);
Why should the compiler be forced to jump through hoops to favor the
naive interpretation?
Respondent S: I am in favor of undefined
behavior; I have yet to see undefined behavior which really
should be defined; there's always room for examples where one
ordering or another is "better". Given that, yes, the compiler
should choose. At random, or for efficiency, or however it
wants.
Respondent S:
I suppose I feel the job of the compiler is to produce code. I see no
obvious semantics to function calls that imply order of evaluation.
Ditto for 'i=++i' or 'i=i++'. I just don't see any obvious meaning,
so I don't see why the compiler should be bound to a specific one.
Respondent S:
I guess my thinking was that a program in a language that doesn't specify
a result should never depend on that result.
Respondent M:
If a language defines things like order of evaluation, interaction of
side effects, etc., then programmers will start to rely on these
features, which might produce harder-to-understand programs.
ddyer: You don't know any such thing; you just hope it's true,
under the extremely dubious proposition that the programmer who wrote
f never made mistakes, and anyone who subsequently changed a and
b could find all possible callers and check them for things like
this.
f(a(foo), b(bar))
When I see such a program fragment (and assuming the program doesn't
rely on compiler specifics), I know that the order of
evaluation is not important, and hence that a() and b() don't
communicate via global variables.
comments/suggestions to: ddyer@real
my home page