An annoying and non-obvious rpmbuild “feature enhancement”

Specifically, under certain circumstances, it can dump debuginfo files into /usr/lib/debug and /usr/src/debug under your buildroot, neglect to build the corresponding -debuginfo package, and then have the gall to complain about the unpackaged files it dumped there.

I have a confession to make: I’m anal-retentive enough about the systems I administer that I need to build RPM packages for everything so they can be easily updated, but I’m lazy enough that I usually just grab source RPMs out of the most recent Fedora repositories and modify the specfiles until they work on CentOS 5. This can lead to some interesting issues, because RPM and rpmbuild are not quite the same in CentOS as they are in Fedora. Sometimes you’re never quite sure whether something is a bugfix or a feature enhancement, and this was one of those lovely times.

This week, I got a request from a user to build a more recent version of gnuplot than the 4.0 version that ships with CentOS 5. Simple enough, right? I took the F13 SRPM, bumped the underlying source tarball to 4.4.1, made a couple of config fixes for the distro change and version bump, and then fired up Mock to build it for CentOS. The software itself would build successfully, but the RPM packaging stage would bomb out with errors like the following in Mock’s build.log:
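For flavor, rpmbuild’s unpackaged-files complaint generally looks something like this (an illustrative reconstruction, not the exact log from this build; real paths will differ):

```
RPM build errors:
    Installed (but unpackaged) file(s) found:
   /usr/lib/debug/usr/bin/gnuplot.debug
   /usr/src/debug/gnuplot-4.4.1/...
```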

It took me two to three days of looking at this issue off and on to determine that the problem was related to a single innocuous line buried deep inside the package spec:

BuildArch: noarch

Interestingly, the line was on a subpackage, which is apparently enough to trip up rpmbuild’s debuginfo handling for every package listed in the spec.

After removing that line, the -debuginfo package was generated just fine.

In short: on older rpmbuild versions like the one in CentOS 5, don’t build arch-specific binary packages from specs that contain noarch subpackages. Newer rpmbuild versions handle this combination fine.
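If you get bitten by this, a quick way to spot the culprit is to grep the spec for noarch subpackages. A minimal sketch (the spec fragment below is invented for illustration):

```shell
# Write a toy spec fragment, then flag the "BuildArch: noarch" line that,
# when it appears on a subpackage, breaks -debuginfo generation under
# CentOS 5's rpmbuild.
cat > /tmp/example.spec <<'EOF'
Name: gnuplot
%package common
Summary: Common files
BuildArch: noarch
EOF
grep -n '^BuildArch:[[:space:]]*noarch' /tmp/example.spec
```

Run against a real spec, any hit on a `%package` section is a candidate for removal (or for splitting the noarch bits into their own spec).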

Hope this helps somebody, somewhere.

Linux vs. Solaris packaging: it’s a philosophical thing

I thought this was a post worth making because this was the hangup that kept me, as an eight-year Linux user, from really getting Solaris.

One of the biggest questions I see repeated all across the Internet is, “why can’t Solaris’s package management be more like Linux’s?” Criticisms abound both of Solaris’s SysV packaging format and the way that Solaris packages have to be installed. Solaris’s opponents claim that the Linux packaging system is far superior, that Solaris’s is stuck in the 20th century, and that Solaris has to adapt in order to survive. OpenSolaris introduced the Image Packaging System (IPS), designed by Ian Murdock, the founder of the Debian project, largely to bring many of Solaris’s detractors back into the fold by providing another way of doing things. But how much difference does it make in the long run for Solaris as a platform?

Many of the questions and doubts about the Solaris packaging model stem from a very Linux-centric way of functioning. What I would like to explain is why the impedance mismatch between Linux and Solaris packaging is not so much a technological divide as it is a philosophical one.

I’m going to start by explaining how FreeBSD does things, because I think it fits neatly right in the middle of the Linux and the Solaris way of managing installed software.

A FreeBSD installation consists of two discrete platforms. The first is the base system, which is a set of system binaries and core services like FTP, NTP, DNS, DHCP and SMTP software. These are considered to be part of the operating system; they are managed by the installer and updated when you update the OS to a new release. The base system is installed under /usr, and other programs not part of the operating system should not be installed there.

The second is the third-party application layer, which consists of binary packages and “ports,” which are instructions for how to build an application from source. You might compare it to Gentoo’s portage system, or maybe to building all of your Red Hat packages from source RPM. The ports system goes beyond a simple “./configure && make && make install” in that it provides automatic dependency resolution, nice GUI interfaces to common compile options, installation registries and pre/post install/uninstall scripts the same way that a binary package manager would. Packages from the third-party ports/packages system are installed under /usr/local, separate from the base system.

The goal of this system is to keep the two layers as orthogonal as possible, meaning that it limits the surface area where they touch. The base system, for example, contains a copy of OpenSSL. But if you build an application in the ports tree, it will pull in its own copy of OpenSSL that will be used by the programs in /usr/local. The idea is that if you keep the two layers as separate as possible, you can upgrade the underlying system trivially without worrying about all of your third-party dependencies breaking on you. You can also keep your third-party programs from breaking your OS upgrade. And unlike in Linux, where you rely mostly on vendor-supplied libraries, it’s still very easy to install very modern software on a not-very-modern version of your OS.

In Linux, the solution to a major operating system upgrade is to back up your important data files, reformat your partitions and create your system from scratch on the new operating system. This is fine for systems of trivial complexity, but becomes very burdensome when you have an enterprise product like an ERP system or a digital collections manager and you would really, really like to just be able to upgrade the OS without everything breaking on you. One of the obnoxious idiosyncrasies of Linux is that when you go to upgrade, your vendor’s new packages may conflict with something in a third-party RPM you’ve installed. Third-party software can actually break your ability to upgrade the base system because everything shares the same hierarchies and you may encounter a lot of unintended conflicts.

Solaris’s packaging system has historically been the SysV package, which provides dependency resolution and many of the other amenities of modern packaging systems, but it never had a mechanism for simple Internet- or network-based delivery. Many organizations NFS-mount a directory full of packages instead. In many ways, it’s closer to Slackware’s idea of packages than to most modern formats like .rpm or .deb. Blastwave was the first community organization to bring Internet-based package management, complete with automatic dependency resolution, to Solaris, but it did so with its own packages, not by touching the base system.

Solaris takes the FreeBSD approach to a more extreme level, partly out of fragmentation and partly out of necessity. Third-party packaging groups like Blastwave and SunFreeware operate independently of one another. Because of this, rather than a /usr vs. /usr/local separation, each packaging group basically builds its own platform, isolated in its own directory hierarchy. Blastwave uses /opt/csw, SunFreeware uses /usr/sfw, and the old Cool Stack suite of web stack packages (which is now part of the Glassfish Web Stack) resides in /opt/csk.

The consequence of this approach is that if you, as an internal packager producing packages for your organization, want to take a piece of software and make a SysV package out of it, you need to build the platform underneath it first. It’s not as simple as writing an Apache package, because you need to rely on your own complex hierarchy of libraries too. When you’re now maintaining 40 packages instead of the 1 you really wanted to build, it becomes simpler to just rely on rsync from a reference system instead. And if you’re running OpenSolaris in production (and there are lots of perfectly valid reasons to do so), you probably don’t want to rely too heavily on vendor-supplied packages because the distribution is a moving target that changes dramatically every six months.

In many environments, the orthogonal-platform approach isn’t a bad thing. You’re probably dealing heavily with change control in the enterprise anyway, and it’s nice to not have to worry quite as much about a Solaris patch bringing down critical system services. Visible Ops teaches us that the most highly-available IT organizations patch far less frequently and rely more on good release management processes and testing updates in a group. In a highly change-controlled environment, you’re essentially going to be building your own distribution, whether that involves rsyncing out Solaris binaries or manually creating well-tested update channels in a Red Hat Network Satellite server. And as with FreeBSD, when you need to perform a major OS upgrade on a highly complex system, it dramatically reduces the chances that something is going to break as a result of the vendor’s updates.

In many other scenarios, it is a bad thing. Many server configurations are very simple — LAMP stacks or Mailman servers, for instance — and you don’t need to put the same effort into maintaining them that you would an ERP or CRM system, a single sign-on portal or other important enterprise services. If the system breaks horribly, it can be rebuilt very easily. For the majority of organizations, most systems are like this, and the ability to very quickly bootstrap a system with needed services is still a big draw to the enterprise consumer. And from a security perspective, keeping four different copies of a library on your system, all used by different programs, means that there are four times as many security updates to make and four times as many chances to let something slip through the cracks. Often it means several different configurations to maintain. For this reason, many organizations ignore Blastwave entirely. (Lots of others spurn third-party packages entirely out of security concerns, quite understandably.)

Linux attempts to create an all-inclusive platform where all software is on the same playing field, so to speak. Third-party packages rely on system libraries in the same way that the vendor’s packages do, for better or for worse, and everything benefits from (or breaks from) updates to system packages. For minor updates, this is a great thing. For major updates, this prevents the majority of systems with sufficiently complex configurations from ever being able to perform an in-place upgrade. The downside is mitigated a little bit by the fact that the package management system makes it quite a bit easier to get the new system up and running again.

But what makes Linux special among these three approaches is that there’s absolutely nothing keeping you from designing your own isolated platform using your own dependencies, just like you would on BSD or Solaris. BSD and Solaris try to enforce this separation, while Linux gives you enough rope to hang yourself with if you’re so inclined.

There’s perfectly valid reasoning for all of these approaches, and I don’t think it’s a bad thing that administrators are able to pick which platform to use based on the situation. It’s important to remember that Solaris isn’t lagging in the 20th century — it’s just a grizzled war veteran who understands the realities of enterprise IT administration.

Linux fails to escape screensaver malware

Screensavers, smiley packs, little animated desktop companions and their ilk have, for a very long time, been a big part of the Windows malware ecosystem, because they’re the kind of thing that specifically appeals to the type of user who doesn’t know any better. For a while, Linux has managed to avoid this, but a screensaver on gnome-look.org has been found to do very bad things:

Malware has been found hidden inside an innocuous ‘waterfall’ screensaver .deb file made available on popular artwork sharing site Gnome-Look.org.

The .deb file installs a script with elevated privileges designed to perform a DDoS attack as well as keep itself updated via downloads.

The dodgy screensaver in question has since been removed from gnome-look and this incident was a very basic, if potentially successful, attempt.

If anything this incident highlights the need to be careful what you download and where you download it from.

Nothing new in the Windows world, of course, but a pleasant reminder that Linux doesn’t intrinsically do anything to prevent users from doing stupid crap.

Recording disk statistics with sysstat on RHEL/CentOS

Unlike on Debian-like systems, the default configuration for sysstat’s sa1 collector on RHEL/CentOS does not include disk statistics (like you would get from iostat) in the sa collection output. This is due to a missing flag in the cron.d fragment that calls sa1. Despite any reasonable assumption about its function, the “-A” flag to sa1 does not include disk statistics, so we have to specify “-d” manually.

To enable disk statistics collection/trending, edit /etc/cron.d/sysstat and change the following:

*/10 * * * * root /usr/lib64/sa/sa1 1 1

to this:

*/10 * * * * root /usr/lib64/sa/sa1 -d 1 1

(Obviously, replace “lib64” with “lib” as appropriate for i386 systems.)

Either wait for the next sa log rotation (at midnight) for sa1 to begin collecting disk statistics, or delete your current day’s statistics. sa1, for whatever historical reason, does not add new counters to an existing sa log file.
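The whole change can be sketched in a couple of lines of shell, assuming the stock 64-bit cron line (swap lib64 for lib on i386). The sed rewrite is shown as a pipeline so you can eyeball the result before touching /etc/cron.d/sysstat:

```shell
# Rewrite the sa1 invocation to add the -d flag.
echo '*/10 * * * * root /usr/lib64/sa/sa1 1 1' \
  | sed 's|sa1 1 1|sa1 -d 1 1|'

# sa1 writes its binary logs to /var/log/sa/saDD, one file per day of the
# month; this prints today's file, which you'd remove to restart collection.
echo "/var/log/sa/sa$(date +%d)"
```

Once you’ve verified the output, run the same sed with `-i` against /etc/cron.d/sysstat.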

Fedora 12 allows users to install signed packages…

Update: According to a post on lwn that I can’t find at the moment, they’ve already reverted this decision with a subsequent update. It should be resolved soon.

…without root privileges, without authenticating.

Yeah, you read that right. SANS has the writeup:

A “bug” created back in November against the latest Fedora release (12) indicates that, through the GUI, desktop users of the Fedora system are able to install signed packages without root privileges or root authentication.  Yes, you just read that correctly.  (I’ll give you a second re-read that sentence so I don’t have to retype it.)  Yes, “it’s a feature, not a bug”.

In all my travels I’ve only ran across one company, ever, that has Fedora rolled out as an enterprise operating system on every desktop.  But what kind of security implications does this have?  I obviously don’t have to explain why this is (may be) a bad idea to the readers of the ISC, as we are all security minded people.

Now, the restrictions.  This change does not affect yum on the command line.  This only affects installing things through the GUI.  (Not that helps any, as most users will be running the GUI anyway.)  You can also disable it.

Currently in the bug, there is some debate about if they should revert this feature.  So, this may be just temporary.

I’m sure this shouldn’t affect most people’s real deployments of anything, since Fedora has always been something of a moving target and has, in my experience, been completely unsuitable for widespread deployment in an organization for a wide variety of other reasons. But just because it’s not appropriate for enterprise customers doesn’t mean that desktop users have nothing to worry about.

That’s because this extends the attack surface for malicious intruders by a really impressive amount. By allowing users unauthenticated access to the package manager, you create a nearly unbounded attack surface for anyone looking to obtain a local privilege escalation on the system. You no longer need to exploit any one specific system service: once you find a hole in anything at all that can be targeted in a default out-of-the-package configuration, you can install it and then exploit it.

I’m not 100% aware of the implications of how this is designed — I may be fundamentally misunderstanding something that’s going on in the back end, and this may not be a Really Bad Thing. But imagine this: someone finds a bug in Firefox, or Flash, or Java. They exploit it to gain the ability to run arbitrary code under the user’s account. They can now silently install Cfengine, Puppet, Bcfg2, or another service that runs as root in the background using PolicyKit. They then attempt to exploit these services, which shouldn’t be running in the first place, and if they succeed, suddenly they have root access to do whatever they want.

Let me slip on my tinfoil hat for a minute: say some minor package maintainer slips a surreptitious backdoor past Fedora’s release engineering processes, into a package that only a handful of people use and that nobody really keeps an eye on. Where previously the package’s obscurity would have localized the damage to the point of uselessness, now that package can be slipped onto anyone’s system at will through a local unprivileged-user exploit.

SELinux mitigates this, absolutely, and unlike in Debian, most important things won’t start by themselves until they’re explicitly enabled by the administrator. But the back door is there even if it’s locked; it’s only a matter of time until someone finds a real-world way to abuse this in very bad ways, and I really wish they would seriously consider reverting this behavior to something a bit less dangerous. This could be a very useful tool in a corporate environment, but as I understand the situation right now, it’s a very bad default.
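If you’d rather lock it down yourself than wait for a revert, something along these lines should do it. This is a sketch from memory of the polkit-1 local-authority format, and the action name is my best recollection of the F12-era PackageKit stack, so verify it against `pkaction` output before trusting it:

```shell
# Hedged sketch: a polkit-1 local-authority rule requiring admin
# authentication for PackageKit package installs. Written to /tmp here;
# the live location would be /etc/polkit-1/localauthority/50-local.d/.
cat > /tmp/restrict-packagekit.pkla <<'EOF'
[Require admin auth for package installation]
Identity=unix-user:*
Action=org.freedesktop.packagekit.package.install
ResultAny=auth_admin
ResultInactive=auth_admin
ResultActive=auth_admin
EOF
```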

More on CentOS 5.3 to 5.4

So, here’s a humbling, humiliating and slightly funny follow-up to my last blog post:

I’ve always done my due diligence in making sure upgrades go smoothly. As a result, I have a habit of tirelessly poring over release notes and the “known issues” section therein. However, I got burned this week when I failed to read all of the release notes.

CentOS has a documentation page for the 5.0 series. And as of this writing, the documentation page links to a document called Release Notes. It does not, however, link to a completely different document that also is called Release Notes. I had read the release notes on the documentation page, but not the CentOS-specific release notes document which was only linked from the front page. I suppose it’s my fault for not noticing that 5.0 through 5.3 all have CentOS release notes links pointing to the wiki, and thinking that the wiki might be a good place to look.

Upon asking about my upgrade issues, the always-helpful folks in #centos berated me for not Googling correctly for the release notes, accused me of trolling when I pointed out that I did find (and read) the release notes but that there was a documentation problem, and asked me why I would dare to criticize the free efforts done by a volunteer in maintaining the documentation. Obviously, after finding the only document called “Release Notes” listed under CentOS’s documentation for 5.4, on the page where this documentation would normally be, the perfectly reasonable, thinking man’s approach to the problem would be to Google for CentOS release notes.

After much soul-searching and reflection, and a few minutes spent filing a bug report about the documentation page, I did find the answers I was looking for in the CentOS-specific release notes tucked away on their wiki:

  • CentOS 5.4 includes glibc and kernel updates. For yum updates the recommended procedure is:

yum clean all
yum update glibc\*
yum update yum\* rpm\* python\*
yum clean all
yum update
shutdown -r now

So, here are the morals of the story:

  • If you try to run the whole upgrade at once using yum upgrade, there is a good chance that you will break your system going from 5.3 to 5.4. Follow the documentation, and update your packages in the order given above, and you should be just fine.
  • If you think you’re missing an important piece of documentation, you probably are.

Did you ever have one of those weeks where everything you learned seemed to be choreographed into place? I think that I’m learning much broader lessons this week about the nature and the danger of assumptions, as the Lone Sysadmin would tell you about me. (Bob Plankers, it turns out, is very much not a “goon,” and one can make a very big ass of themselves by assuming other people are familiar with the other meanings of such a word.)

CentOS 5.3 to 5.4 upgrade woes

I’ve been pushing out CentOS 5.4 on a number of test systems this week, and I came upon a very interesting, very insidious, and very annoying problem.

When running the upgrade, yum upgrade seems to run without a hitch, and returns completely successfully with no errors or warnings. However, what actually happens in the background is that the cleanup process breaks silently, and the package database gets completely filled up with entries for duplicate packages that shouldn’t be allowed to coexist. I was alerted to the problem by rkhunter, which notified me during its post-reboot run that several files were mismatched versus what the package manager thought they should contain. If you rpm -qa a package with multiple versions installed, the order they come back in is arbitrary and depends on how they end up in your RPM BDB database; when rkhunter called rpm --verify, it was running against the older version, which was failing the checksum comparison.

The number of package errors that rkhunter actually caught paled in comparison to the huge number of screwed up package entries on the system.

This usually doesn’t cause a problem. In most cases, if the cleanup portion fails, you can just run yum-complete-transaction and it will pick up where it left off. For whatever reason, this doesn’t work here.

After hitting this problem, if you try to run another update, you get output like this:

I cooked up a hairy one-liner to find the duplicates:

rpm -qa --queryformat="%{name}.%{arch}\n" | sort | uniq -d | perl -ne 's/(.*)\.(.*)/\1/g; print' | xargs rpm -qa --queryformat="%{name}-%{version}-%{release}.%{arch}\n" | sort

(It’s only so long because you need to match the arch on x86_64, and rpm -qa doesn’t play nicely with packagename.arch-format names. Interestingly, though, I’ve only experienced the problem on the i386 servers that I’ve upgraded.)

Here’s the output on one host following a supposedly successful upgrade:


You need to go through these and remove the outdated package versions, one by one. (If you’re confused about which is newer, you can run rpm -qi <packagename> and see, among other details, the date the package was built.) This should be a safe operation; the package manager reference-counts files, and won’t remove a file belonging to multiple packages until all of those packages have been removed, even though you should never have multiple packages owning the same file in the first place. I’m fairly sure that removing these packages manually shouldn’t trigger the %postun scripts, and that the package manager will figure out that removing one version while you have a newer one installed means it’s an upgrade instead of an uninstall. If you’re worried, though, you can use rpm -e --justdb to remove only the package database entries while not running the scripts or actually removing any files.

Following the removal of the stale packages, a yum -y upgrade should fix the remaining issues.
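The dedup logic from the one-liner above boils down to sort-and-uniq, which you can sanity-check against canned input (the package names below are invented):

```shell
# Print each name.arch value that appears more than once -- the same
# duplicate-detection step the longer rpm -qa one-liner performs.
printf '%s\n' glibc.i686 glibc.i686 bash.i386 zlib.i386 zlib.i386 \
  | sort | uniq -d
```

If the box has yum-utils installed, `package-cleanup --dupes` should list the same duplicates and `package-cleanup --cleandupes` can remove the older entries for you, though I’d still eyeball what it plans to do on a production host.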

It’s important to note that the packages do all upgrade — running rpm --verify on the package after removing the old version does not result in any checksum mismatches or any other visible strangeness. The old versions simply don’t get removed from the package manager, which wreaks havoc on your dependency graphs.

I don’t know what’s causing the problem, but I think it might have something to do with where the upgrades to rpm/yum are placed in the middle of the transaction. Will report back after the next batch of updates, in which I will update rpm and yum first before proceeding with the remainder of the upgrade.

awesome WM Appreciation Post

Multiple workspaces always made me kind of sloppy as a user when they were supposed to make me more organized. I would open up dozens of tabbed terminals and lose track of some of them. Eventually I spent enough time looking for windows I should have had right in front of me, and closing windows that shouldn’t be closed, that I figured there had to be a more productive way out there.

I’ve flirted with tiling window managers before. Ion, wmii and xmonad always left a bad taste in my mouth. They were inflexible and annoying and difficult to use with any workflow that wasn’t 100% centered around the terminal. God help you if a program needed to open a dialog window.

awesome is a lightweight tiling window manager with very good support for floating windows, without having to mess with separate layers for tiled or floating windows. It has full support for multiple displays through XRandR, it seems to be a lot faster than xmonad and other tiling WMs, and it’s freaking tiny — the full source, including wallpapers, is just over 300k. It is very flexible, with tagging and per-screen workspaces, extensive configuration through Lua, and a great user community. It also has a number of really nice mouse-driven features, like resizing your layout by holding the meta key and right-dragging. A lot of tiling WMs drive me nuts because you have to memorize 45 keyboard shortcuts in order to use them effectively.

I’ve been a KDE user since 2002, with occasional flirtations with other desktop environments. awesome made me switch straight-out. At the moment, I’m running a pared-down Xfce setup, with xfwm4 swapped out for awesome. The other utilities, like the Xfce session manager and panel, are still useful. (The one thing that drives me nuts is that awesome takes over my systray. I would really like it in my Xfce panel.)

Give it a shot. It’s worth the learning curve.

Obligatory screenshot is forthcoming.

© 2019 @jgoldschrafe
