How to destroy FOSS from within - Part 3

This is the third installment of the article. In case you missed them, here are part one and part two.




In the previous post, I stated that the direction of a FOSS project is set by two groups of people:
a) People who work on the project, and
b) People who are allowed to work on the project.

We examined (a) in part two, so now let's examine (b).

Who are the people allowed to work on the project? Isn't everyone allowed? The answer is a solid, resounding "NO".

Most FOSS projects, if they receive contributions from more than one person, tend to use some sort of source code management (SCM) system. In a typical SCM system, there are two classes of users: users with commit rights, who can modify the code in the SCM (committers), and read-only users, who can read and check out code from the SCM but cannot change anything.

In most FOSS projects, the number of committers is much smaller than the number of read-only users (potentially, anyone in the world with enough skill is a read-only user, if the SCM system is open to the world - e.g. if you put the code in a public SCM repository such as github).
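To make this concrete, here is a minimal sketch using git (the repository URL is made up): anyone can clone and read a public repo, but only committers can change what the project ships.

git clone https://github.com/example/project.git   # read-only access: anyone can do this
cd project
# ...edit something, commit locally...
git push origin master   # rejected with a permission error unless you have commit rights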

The committers don't necessarily write code themselves. Some do; some just act as "gatekeepers": they receive contributions from others, vet and review the changes, and "commit" them (= update the code in the SCM) when they think the contribution has met certain standards.

Why does it matter? Because eventually these committers are the ones who decide the direction of the project, by virtue of deciding what kinds of changes are accepted.

For example, I may be the smartest person in the world, I may be the most prolific programmer or artist in the world; but if the committers of the project I want to contribute to don't accept my changes (for whatever reason), then for all practical purposes I may as well not exist.




Hang on, you say, FOSS doesn't work this way. I can always download the source (or clone the repo) and work on it on my own! No approval from anybody is required!

Yes, you can always do that, but that's you doing the work privately. That's not what I mean. As far as the project is concerned - as far as the people who use that project are concerned - if your patches aren't committed back to the project mainline, then you're not making any changes to the project.

But hey, wait a minute here, you say. That's not the FOSS I know. The FOSS I know works like this: if they won't let me commit these large gobs of code that I've written, what's stopping me from just publishing my private work and making it public for all to see and use? In fact, some FOSS licenses even require me to do just that!

Oh, I see. You're just about to cast the most powerful mantra of all: "just.fork.it", aren't you?




I regret to inform you that you have been misinformed. While the mantra is indeed powerful, it unfortunately does not always work.

Allow me to explain.

A fork usually happens when people disagree with the committers on the direction they are taking.

Disagreements happen all the time; it's only when they are irreconcilable that a fork happens.
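Mechanically, forking is trivial - which is exactly why the mantra sounds so potent. A hedged sketch with git (the URLs are made up):

git clone https://example.org/original/project.git myfork
cd myfork
git remote rename origin upstream                        # keep the original around for reference
git remote add origin https://example.org/me/myfork.git  # point at your own public repo
git push -u origin master                                # congratulations, a fork now exists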

But the important question is: what does forking accomplish in the end?

Personally, I consider a fork to be successful if it meets one of two criteria:

a) the fork flourishes and develops into a separate project, offering alternatives to the original project.

b) the fork flourishes and the original project dies, proving that the people behind the original project have lost their way and bet in the wrong direction.

In either case, for these to happen, there must be enough skilled people to back the fork. The larger and more complex the project, the more skilled people must revolt and stand behind the fork. It's a game of numbers; if you don't have the numbers, you lose. Even if you're super smart, you only have 24 hours a day, so chances are you can never single-handedly fork a large-scale project.

In other words, the "just.fork.it" mantra does not always work in the real world; in fact, it mostly doesn't.

Let's examine a few popular forks and see how well they have done.

1. LibreOffice (fork of OpenOffice). This is a successful fork, because most of the original developers of OpenOffice switched sides to LibreOffice. The original project is dying.

2. eglibc (fork of glibc). Same story as above. Eventually, the original glibc folded, and the eglibc fork was officially accepted as the new "glibc" by those who own the "glibc" name.

3. DragonflyBSD (fork of FreeBSD). Both the fork and the original survive, and they have grown separately to offer different solutions to the same problem.

4. Devuan (fork of Debian). The fork has existed for about two years now; the jury is still out on whether it will be successful.

5. libav (fork of ffmpeg). The fork failed; only Debian supported it, and it is now dying.

6. cdrkit (fork of cdrtools). The fork failed; it stagnates while the original continues.

7. OEM Linux kernels (forks of the Linux kernel). There are a ton of these forks; each ARM CPU maker and ARM board maker effectively has one. They have mostly failed: the forks never advanced beyond the original patches supporting the OEM's hardware. That's why so many Android devices are stuck on 3.x kernels. Only one or two are successful, and those that are, are merging their changes back into the mainline - and will eventually vanish once the integration is done.

8. KDE Trinity (fork of KDE). It's not a real fork per se, but more a continued maintenance of KDE 3.x. It failed; the project is dying.

9. MATE desktop (fork of GNOME). Like Trinity, MATE is not a real fork per se, but a continued maintenance of GNOME 2.x. I'm not sure about the future of this fork.

10. Eudev (fork of systemd-udev). The fork survives, but I'd like to note that the fork is mostly about separating "udev" from "systemd", not about going in a separate direction and implementing new features, etc. Its long-term survivability is questionable too, because only two people maintain it. Plus, it is only used by a few distributions (Gentoo is the primary user, but there are others too - e.g. Fatdog).

11. GraphicsMagick (fork of ImageMagick). The fork survives as an independent project, but I would say it has failed to achieve its purpose: it doesn't have much impact - most people only know about ImageMagick and prefer to use it instead.

I think that's enough examples to illustrate that, in most cases, your fork will only **probably** survive if you have the numbers. If you don't, then the fork will either die off, or have no practical impact as people continue to use the original project.

In conclusion: the mantra of "just.fork.it" is not as potent as you thought it would be.

As such, the direction of a project is mostly set by its committers. Different projects have different policies on how committers are chosen, but in many projects committers are selected based on:
a) requests from the project's (financial) sponsor, and/or
b) meritocracy (read: do-ocracy) - how much contribution he or she has made before.

But remember what I said about do-ocracy?

Posted on 14 Nov 2017, 23:57 - Categories: Linux General


How to destroy FOSS from within - Part 2

This is the second installment of the article. In case you missed it, part one is here.




In the past, companies tried to destroy FOSS by discrediting it. This was usually done by hiring an army of paid shills - people who spread hoaxes, misinformation, and self-promotion wherever FOSS people usually hang around (forums, blog comments, etc). This becomes too obvious after a short while, so the (slightly) newer strategy is to employ "unhelpful users" who hang around the same forums and blog comments, pretending to help, when all they do is shoot down every question by embarrassing the inquirer (giving "oh, noob question, RTFM!" or "why would you want to **do that**???" type responses, all the time).

Needless to say, none of this works reliably (usually it doesn't work at all), as long as the project is still active and its community isn't really filled with assholes.

In order to know how to destroy FOSS, we need to know how FOSS survives in the first place. If we can find the lifeline of FOSS, we can choke it, and FOSS will inevitably die a horrible death.

The main strength of FOSS is its principle of do-ocracy. Things get done when somebody's got the itch to do them; and that somebody, by virtue of do-ocracy, sets the direction of the project.

The main weakness of FOSS is its principle of do-ocracy. Things get done when somebody's got the itch to do them; and that somebody, by virtue of do-ocracy, sets the direction of the project.

The repeated sentence above is not a mistake; it's not a typo. Do-ocracy is indeed both the strength and the Achilles' heel of FOSS. Let's see why this is the case.

Direction in a FOSS project is set by two groups of people:
a) People who work on the project, and
b) People who are allowed to work on the project.

Let's examine (a).

Who are the people who work on the project? They are:
1) People who are capable of contributing
2) People who are motivated to contribute




Let's examine (1).
Who are the people capable of contributing? Isn't everyone equally capable? The answer - though it may not be obvious, thanks to the popular "all people are equal" movement - is a big, unqualified NO. People who are capable of contributing are people who have the skill to do so. Contributing documentation requires skilled writers; contributing artwork requires skillful artists; contributing code requires masterful programmers. If you have no skill, you can't contribute - however motivated you are.

The larger a project grows, the more complex it becomes. The more complex it becomes, the more experience and skill are needed before somebody can contribute and improve the project. To gain more skill, somebody needs to invest time and effort, and get themselves familiar with the project and/or the relevant technology. A bigger "investment" means fewer people can "afford" it.

And this creates a paradox. The more successful a project becomes, the larger it becomes. The larger it becomes, the more complex it becomes. The more complex it becomes, the smaller the available talent pool.




Let's examine (2).
People contribute to FOSS projects for many reasons, some less noble than others. For example:
- School projects (including GSoC).
- Some do it to "pay back" ("I used FOSS software in the past, now I'm paying it back by contributing").
- Some do it for fame, and to show off their skills.
- Some do it just to kill time.
- Some do it to enhance their resume (oh wow - look at the number of projects in my github account!!! (although most of them are forks of others...)).
- Some do it because they are the only ones who need the feature they want, so they just get it done.
- Etc; the reasons are too numerous to list. But there is one **BIG** motivation I haven't listed above, and I'm going to write it down in a separate sentence, because it is worthy of your attention.

👉👉👉 Some do it because it is their day job; they are being paid to do so 👈👈👈




What can we conclude from (1) and (2)?
A larger, more complex project requires people with more skill.
More skill requires more investment.
More investment requires more motivation.
Motivation can be bought (= jobs).

This leads to the inevitable conclusion: the more complex a project becomes, the higher the chance that the people working on it are paid employees. And paid employees follow the direction of their employer.

In other words: a larger project has a higher chance of being co-opted by someone who can throw money around to get people to contribute.

We will examine (b) in the next installment.

Posted on 5 Mar 2017, 14:56 - Categories: Linux General


xscreenshot is updated

xscreenshot, my dead-simple screen capture program for X11, gets a facelift. It can now capture screenshots with the mouse cursor in them, and it can also capture a single window. Oh, and filenames are now created based on timestamps rather than just a running number. You can get the latest version from here.

Posted on 4 Dec 2016, 22:15 - Categories: Linux General


How to destroy FOSS from within - Part 1

Although I never set a scope for this blog, from my previous posts it should be obvious that this is a technical blog. I rarely post anything non-technical here, and I plan to keep it that way.

But there have been things moving under the radar which, while not technical in nature themselves, will affect technical people the most, and hit them the hardest - especially people working in FOSS, whether professionally or as a hobby.

The post is too long to write in one go, so I will split it into a few installments.




For many years, I was under the silly belief that nothing, nothing short of a global-level calamity (the kind that involves the extinction of mankind), could stop the FOSS movement. The horse has left the barn; the critical mass has been reached and the reaction cannot be stopped.

The traditional way companies fight each other is by throwing money at marketing and fire sales, outspending each other until the other side caves in and goes bankrupt. Alternatively, they can swallow each other ("merge and acquire"), and once merged they just kill the "business line" or "the brand".

But they can't fight FOSS like that. Most FOSS companies survive on support. You can acquire them (e.g. MySQL) and then kill them, but another can easily spring up the next day (e.g. MariaDB). You cannot run a fire sale on software licensing continuously, because the price of FOSS software licensing is eventually $0, and you can't compete with "free" - well, not forever.

I still remember the days when a certain proprietary software company threw its flailing arms up in the air in exasperation at not being able to compete against FOSS. The only thing they could do was bad-mouth FOSS and keep talking about "quality", and "amateur", and "unprofessional", when it was obvious that their own products and conduct were none the better.

So I was a believer that money cannot stop FOSS.

And how wrong I turned out to be.


Posted on 4 Dec 2016, 22:25 - Categories: Linux General


Booting your BIOS system via UEFI

In my previous post, I wrote about my exploration of running UEFI on BIOS-based systems. The original motivation was to find a "cure" for long boot times from USB flash drives when the initrd is large (as is the case in Fatdog). I reasoned that since, on many BIOS systems, USB booting is done via hard-disk emulation (and thus depends on the quality of the emulation), it would be better to run a firmware that recognises USB devices and is capable of booting from them directly, without emulation.

I managed to get DUET working on qemu, but it didn't work on some of my target systems. Another alternative I explored is CloverEFI, which is a fork of DUET. This worked better than DUET, and it booted on systems where DUET wouldn't. However, I could not see any improvement in boot times. I haven't looked at DUET's disk driver; I was hoping it would provide a native UHCI/EHCI driver, but it probably doesn't - if it still depends on the BIOS to access USB via hard-disk emulation, then I've gained nothing.

So the initial objective can be considered a failure.

However, come to think of it, I now have a better reason why you would want to run UEFI on your BIOS system. When you run DUET, you are essentially "flashing" your BIOS and "upgrading" it to newer UEFI firmware. While BIOS can do most of what UEFI can, there is one thing it cannot do: it cannot boot from a disk over 2TB in size†. This is not a hardware limitation; it is the consequence of applying a 36-year-old design meant for 5 MB hard disks to today's world. With the UEFI "update", you can format your disk using GPT and boot successfully from it.
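Where does that 2TB figure come from? The MBR partition scheme stores sector addresses as 32-bit LBA values, and with traditional 512-byte sectors the ceiling is easy to compute in a shell:

echo $(( 2 ** 32 * 512 ))   # 2199023255552 bytes = 2 TiB (about 2.2 TB)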



Note†: It is possible to format a disk using GPT and have the BIOS boot from it. I even described the process in my own article. That article, however, has a non-obvious limitation: the bootloader you use must be capable of reading the filesystem and booting the OS of your choice. The article was targeted at Linux users, thus syslinux was the chosen example, and it works beautifully. If, however, you want to boot another OS that syslinux doesn't understand, then you have to choose a different boot loader that:
a) can be booted by BIOS
b) understands GPT
c) can boot your OS of choice

In this case, booting a GPT disk via DUET doesn't sound unreasonable, considering that for some OSes you've got a wider choice of UEFI bootloaders than non-UEFI ones.


Posted on 30 Apr 2016, 03:49 - Categories: Linux General


UEFI is the new DOS

As I was doing some reading about UEFI emulation on BIOS systems, I came across this interesting link: http://www.multiboot.ru/DUET.htm. In essence, what the linked page says is that UEFI is essentially a clone of DOS. I'm inclined to agree.

This is why: the page elaborates and compares how (from the end-user's perspective) they are essentially the same. There is a kernel (UEFI TSL and UEFI RT [explanation here]); there is a command-line interpreter (shellx64.efi); there is a standard executable binary format (.efi files, which are a sort of flat-mode PE/COFF [details here]); there is a system library you can link against to build your own binaries (EDK - UEFI Dev Kit, c.f. libc); and an .efi binary can do anything you want it to do, just like a DOS program can. UEFI provides kernel-like services: it handles input devices, manages text and graphical displays, and manages the filesystem (FAT32 - the successor to DOS' original FAT). The shell is single-user, just like COMMAND.COM. You can even extend its capabilities by installing "drivers" - filesystem drivers, network drivers, what have you. A 64-bit DOS with support for all modern hardware, here we come. What's not to like?
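The analogy even extends to startup scripts: the UEFI shell automatically executes startup.nsh when it launches, much like COMMAND.COM ran AUTOEXEC.BAT. A tiny illustrative script in UEFI shell syntax (the contents are my own, purely for flavour):

# startup.nsh - run automatically by the UEFI shell, a la AUTOEXEC.BAT
echo Welcome to 64-bit DOS
map -r    # remap and list block devices (fs0:, fs1:, ...)
fs0:      # switch to the first filesystem, like changing to C:
ls        # list files; "dir" works too, true to its heritage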

If your system comes with BIOS, you can run UEFI firmware using DUET (Developers' UEFI Environment). DUET is basically UEFI firmware on a disk (or flash drive, or optical drive) that you can "boot" from your BIOS. Rod Smith (the author of rEFInd, a popular UEFI boot manager) wrote about it here. Once booted, DUET takes over the system, and the whole system now acts as if it has UEFI firmware. You can boot your UEFI-capable OS with it, or you can run shellx64.efi - welcome to UEFI DOS.

If your system already comes with UEFI firmware in ROM - that's the equivalent of having ROM DOS. Rejoice!

Posted on 30 Apr 2016, 03:18 - Categories: General


One bootx64.efi to rule them all

Barry recently blogged about gummiboot, and his post contains an interesting link about a feature of gummiboot that I had previously overlooked. Barry linked to a phoronix article, which linked to a blog post from Harald.

TL;DR: gummiboot has a feature to build a single UEFI binary that contains the Linux kernel, the initrd, and the kernel command line. One UEFI file that contains the entire OS.

Yes, with this, you can have one bootx64.efi (bootloader) that actually contains the entire operating system (kernel, initrd, etc). While the idea is not new - Rob Landley pushed for the ability to embed an initrd into vmlinuz a long time ago - this goes one step further: embedding into the bootloader!

Why would we even bother? For one thing, it enables you to carry a stick with a FAT32 partition on it, and a single file, strategically located and named /EFI/boot/bootx64.efi, which contains the entire operating system for recovery and rescue purposes. It also means the return of the boot-time virus - this time in the form of a boot-loader virus (instead of a boot-sector one) from the days past, if you are not careful.

Another thing: if you run an embedded system with a UEFI bootloader, and your OS is loaded entirely into RAM, you can happily replace/upgrade your OS ("firmware") in one swoop - there are no transactions needed to check whether the bootloader update worked, whether the kernel update worked, whether the initrd works... you just replace one file; if that one file's update is okay (checksum matches, etc) then all is good.
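A hedged sketch of what that single-file update could look like (the paths and the checksum file are illustrative, not from Harald's code):

# verify the new image; if the checksum matches, a single rename
# replaces bootloader, kernel, and initrd all at once
sha256sum -c bootx64.efi.new.sha256 || exit 1
mv bootx64.efi.new /mnt/esp/EFI/boot/bootx64.efi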

Harald has the code here, but it's somewhat tied to Fedora and systemd. Here is the extracted code that does the actual magic:
#!/bin/sh
# "$1" is the output file: the unified EFI binary to produce.

# The kernel command line goes into a plain text file first.
echo your kernel cmdline > cmdline.txt

# Pack the OS release info, command line, kernel, and initrd into
# fixed sections of the gummiboot EFI stub, which reads them at boot.
objcopy \
--add-section .osrel=/etc/os-release --change-section-vma .osrel=0x20000 \
--add-section .cmdline="cmdline.txt" --change-section-vma .cmdline=0x30000 \
--add-section .linux="/path/to/your/vmlinuz" --change-section-vma .linux=0x40000 \
--add-section .initrd="/path/to/your/initrd" --change-section-vma .initrd=0x3000000 \
linuxx64.efi.stub "$1"
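For illustration: if you save the script above as, say, make-unified-efi.sh (the name is mine), with linuxx64.efi.stub in the current directory, you would produce the all-in-one bootloader like this:

sh make-unified-efi.sh bootx64.efi
# then place bootx64.efi at /EFI/boot/bootx64.efi on a FAT32 partition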


The only catch is this - where does this "linuxx64.efi.stub" come from?

This EFI stub is built as part of the gummiboot bootloader. Gummiboot has been "obsoleted", as its contents were "absorbed" into systemd (and renamed systemd-boot or something), but the code still exists and still works nicely here: https://cgit.freedesktop.org/gummiboot/ - you just need to check out one commit before the final one (the final commit deletes everything, to persuade people to move to systemd-boot).

I tested this with Fatdog64's initrd, with and without the basesfs in it. Without the basesfs, I ended up with a 61 MB bootx64.efi; with it, a 366 MB bootx64.efi. Both work as expected when launched from qemu, as long as I give it 2 GB of RAM or more.
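For the record, this is roughly how such a test can be reproduced (the OVMF.fd firmware path is illustrative; any UEFI firmware image for qemu will do):

mkdir -p esp/EFI/boot
cp bootx64.efi esp/EFI/boot/bootx64.efi
qemu-system-x86_64 -m 2048 -bios OVMF.fd -drive file=fat:rw:esp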



Posted on 19 Apr 2016, 11:09 - Categories: Linux General


Can a FOSS contributor retract his/her contributions?

Another aspect of Rage-quit: Coder unpublished 17 lines of JavaScript and “broke the Internet” comes from the comments I've read on the site: is it okay for a FOSS contributor to retract his/her contribution from a public site? Some say yes (the contributor has rights) and some say no (once open, it is open forever).

I would think the answer is obvious, if we separate the contribution from the publishing.

The author of a FOSS contribution has full rights to his contribution - he can retract, remove, destroy, change, or even re-license his work. There is no question about it.

But due to the nature of FOSS, once the contribution is published, anyone can take it and re-publish it (with attribution as needed). The original author has no say in this and can't demand that the copies be taken down, because when he/she published the code, he/she gave the world the irrevocable right to do just that.

That does not mean the author cannot revoke his/her own work - of course they can. It's just that they can't demand that everyone else also take down their copies of that work.

When an author publishes his/her work through a 3rd party, however, he/she has to obey the terms of this 3rd-party publisher. Some give the right to retract and delete; some do not. The point is, the publisher must make the terms and conditions clear.

Github, for example, allows you to retract and delete anything you publish on it - no trace will be left on its site if you choose to remove your work. Facebook is the opposite - although at the beginning they didn't make it clear, nowadays it is pretty obvious that while you can delete your account and logins, whatever you submit to Facebook will live forever, and they can even use it long after you've removed your account. You gave them that right when you joined Facebook. If you don't agree - well, don't use Facebook. Simple.

Now back to npmjs.com. They should have made it clear whether they allow contributors to remove their contributions, and then stood by that. If they allow authors to remove their contributions, people who use the service know that anything on npmjs should be considered ephemeral and can disappear at any time - thus they can take mitigating actions (or choose not to use the service at all). If they don't allow removals, authors who contribute to the service know that anything they choose to publish through npmjs.com is perpetual, and can then decide whether or not they want to contribute. But npmjs.com can't have it both ways - because in the end they will irritate both the authors and the end users.



Posted on 27 Mar 2016, 23:41 - Categories: General


Local copy anyone?

I just read this: Rage-quit: Coder unpublished 17 lines of JavaScript and “broke the Internet”.

There are too many interesting aspects of the article to consider, but the one that surprised me the most is this: somebody removed their contribution from a public repo, and everything broke? Really? Has nobody heard of a "local copy"?
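For the record, keeping a local copy is trivial. A hedged sketch with npm (the package name is just the obvious example):

npm pack left-pad              # downloads and saves left-pad-<version>.tgz locally
git add left-pad-*.tgz         # vendor the tarball into your own repo
npm install ./left-pad-*.tgz   # installs from your local copy; the registry can vanish for all you care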

Posted on 27 Mar 2016, 21:25 - Categories: General


Updated kbstate and a2dp-alsa

I've updated kbstate to detect multiple keys from multiple event devices at once, making usage a lot simpler.

I've also updated a2dp-alsa to work correctly with Android devices, improved it so that a2dp-buffer is no longer necessary, and fixed the Makefile for newer gcc. It can now be used reliably as a "pass-through router".

Posted on 28 Dec 2015, 20:48 - Categories: Linux General

