Thursday, March 4, 2010

Coding styles that make me twitch, part 8

One of the worst programming habits I know of is assuming that everything will work out: you just perform the command or function call, and it has to work. In other words: do not bother checking for error conditions or propagating them, because they are not going to happen anyway. Right?


But the code works. Mostly. Except when an error condition occurs. And it will, eventually.

In some instances, such code is a glorified shell script. I have ranted about that before. But catching error conditions properly can be trickier with external programs than with in-Perl functions and modules. Especially if you have no control over what the external program does, but the original programmer did.
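For external commands, the bare minimum is checking the return value of system() and unpacking $?, as the perlfunc documentation describes. A minimal sketch (the command name is just a stand-in):

```perl
# Check an external command instead of assuming it worked.
# "/usr/bin/some_command" is a hypothetical example.
my $rc = system('/usr/bin/some_command', '--flag');
if ($rc != 0) {
    if ($? == -1) {
        die "failed to execute: $!\n";
    } elsif ($? & 127) {
        die sprintf("child died with signal %d\n", $? & 127);
    } else {
        die sprintf("child exited with value %d\n", $? >> 8);
    }
}
```

That is a dozen lines, which is probably why it so often gets skipped — but it is the difference between knowing your cron job failed and finding out three weeks later.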

Even with in-Perl functions and modules, you might run into, ehrm, interesting usage.

Usually, the root of the problem is that the original programmer did not foresee that some of his code could grow into something else.

While it may have been a good idea to create an SQL query wrapper to hide parts of what DBI does for auto-committing statements:
sub do_sql {
    my $qry = shift;
    my $ignore = shift;
    return if (!$dh);
    my $sh = $dh->prepare($qry);
    if ($sh && !$sh->execute) {
        if ($sh->errstr =~ /(MySQL server has\
gone away)|(Lost connection to MySQL server)/) {
            &log("Lost MySQL, reconnecting.");
            return if (!$dh);
            $sh = $dh->prepare($qry);
            undef $sh if ($sh && !$sh->execute);
        } else {
            undef $sh;
        }
    }
    &log("SQL failed: '$qry'.") if (!$sh && !$ignore);
    return $sh;
}
that is no guarantee that the implementation will be future-proof, when the subroutine is used like this:
my $sh = &do_sql("SELECT * FROM tab1 WHERE\
 x<>'$cgi->{data}' AND y<>42");
if (my $res = $sh->fetchrow_hashref) {
    &do_sql("UPDATE tab1 SET x='$cgi->{newdata}',\
 y=23, z='$cgi->{ref}'");
    &do_sql("UPDATE tab2 SET tab1_changed='yes'");
    &log("Updated tab1, ready for externals");
    # Process the changes made above:
    &log("Finished processing");
}

Imagine now that the external processing also performs changes in the database tables mentioned above, and that the code above is called in a cron job every minute or so.

Here is an attempt at listing the worst parts:
  • do_sql only pretends to do proper error checking; mostly it does nothing useful about the error situations.
  • The query result seems of little consequence.
  • There is an obvious need for bound variables in the prepared statement, but do_sql does not support that. So the code pretends the problem does not exist.
  • When do_sql is used for updates, there is no check whether the returned statement handle ($sh) is defined or not, so there is no way of knowing whether we are clobbering the database.
  • Why do we not care whether the external commands succeed or not?
  • SQL transactions, anyone?
  • Yeah, other parts of the style suck, too.
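For contrast, here is one way the same wrapper could propagate errors, use bound variables, and let the caller control transactions. This is a rough sketch assuming DBI with RaiseError enabled, not a drop-in replacement; $dsn, $user, $pass, and the &log routine are placeholders from the surrounding code:

```perl
use DBI;

# Sketch only: RaiseError makes every failed prepare/execute throw,
# instead of being silently swallowed; AutoCommit off gives us
# real transactions.
my $dbh = DBI->connect($dsn, $user, $pass,
                       { RaiseError => 1, AutoCommit => 0 });

sub do_sql {
    my ($qry, @bind) = @_;
    my $sth = $dbh->prepare($qry);
    $sth->execute(@bind);   # placeholders keep $cgi data out of the SQL text
    return $sth;
}

# Caller: both updates succeed, or neither does.
eval {
    my $sth = do_sql("SELECT * FROM tab1 WHERE x <> ? AND y <> ?",
                     $cgi->{data}, 42);
    if (my $res = $sth->fetchrow_hashref) {
        do_sql("UPDATE tab1 SET x = ?, y = 23, z = ?",
               $cgi->{newdata}, $cgi->{ref});
        do_sql("UPDATE tab2 SET tab1_changed = 'yes'");
    }
    $dbh->commit;
};
if ($@) {
    $dbh->rollback;
    &log("SQL failed, rolled back: $@");
}
```

Note that the "MySQL server has gone away" dance disappears entirely: reconnection policy belongs in one place (or in DBD::mysql's hands), not sprinkled through a half-working wrapper.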
Now imagine the example above multiplied into thousands of lines of interdependent code.

I am happy to say that I do not see things like this too often.

However, cleaning up code like this is a PITA, and it is often easier just to close your eyes, add your own code, and leave well enough alone.