I've never been a fan of SELinux, in the context of what a "normal" GNU/Linux install looks like.
I've only just managed to put into words what my misgivings are after reading this article: it feels like anti-virus software. The trouble is, it's bolted-on security. It's trying to contain software which wasn't written to a strict boundary, to a strict boundary. So you start with the crappy boundary of existing insecure software. That doesn't achieve much: it prevents each process from expanding its role, but for most existing software that role is already a huge boundary.
It makes more sense in the context of "fresh" Linux OS software, e.g. Android, but that's exactly where a strict policy from the start, like seccomp, would have done the job.
I think the article misses that there's a third way: subdivided software written with strict roles and boundaries in the first place. That's why I classify this as "anti-virus": its enforcement only kicks in after compromise. Prevention is better.
It's trying to contain software which wasn't written to a strict boundary, to a strict boundary.
Exactly. That's why the NSA wrote SELinux, decades ago. It wasn't intended as a security measure in itself; it was intended to encourage development of user-space software that lived within strict security limits.
That never happened. The desire for loopholes ("must phone home", etc.) beat security restrictions. All a single-player game really needs is read access to its own assets, input from user input devices, output to graphics hardware and sound, and the ability to write in its own preferences/save directory. Try to find a commercial game which will run under such restrictions.
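The game's needs listed above map onto only a handful of access rules. As a sketch, in SELinux policy syntax it might look like the fragment below; the `game_t`, `game_assets_t`, and `game_save_t` types are hypothetical names invented for illustration, while the device types resemble those in the reference policy.

```
# Hypothetical types for the game process and its files:
type game_t;
type game_assets_t;
type game_save_t;

# Read-only access to its own assets.
allow game_t game_assets_t:file { read open getattr map };

# Read/write in its own preferences/save directory.
allow game_t game_save_t:dir  { search write add_name };
allow game_t game_save_t:file { create read write open getattr };

# Input devices, graphics hardware, and sound.
allow game_t event_device_t:chr_file { read open };
allow game_t dri_device_t:chr_file   { read write open ioctl map };
allow game_t sound_device_t:chr_file { read write open ioctl };
```

Note what is absent: no network access, no reading the user's home directory. That is the restriction no commercial game would tolerate.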
SELinux basically implements the concept of "trusted computing". That is a military term, and it's about how the generals in the Pentagon can trust a computer in the field not to "leak" sensitive data (like how mismanaged the war is), whether to the enemy or to one's own soldiers.
Later big media would embrace the term as an alternative to DRM.
Effectively, SELinux treats anything and anyone as a potential attacker, including the owner and user of the computer it is installed on.
And the reason it "makes sense" on Android is that the owner of the device is not the owner of the OS; that is the OEM, the carrier, and ultimately Google.