From the desk of James Fatdog64, FatdogArm, Linux, ARM Linux, and others Github fallout and what we can learn from that General Hahaha. What can I say? <br /> <br />It's the talk of the town: <a href= target=_blank>Microsoft buys Github</a>. <br /> <br />Why are you surprised? It's been a long time coming. See my previous <a href=?viewDetailed=00182 target=_blank>articles</a> about FOSS. Add the fact that Github fails to make a profit. Their investors want out; they would welcome a buyer. <b>Any</b> buyer. <br /> <br />But today I don't want to talk about the sell-out; there are already too many others discussing it. Instead I'd like to ponder the impact. <br /> <br />Rightly or wrongly, many projects have indicated that they will move away. These projects will not be on github anymore, either in the near future or immediately. What's going to happen? Some of these projects are libraries, which are dependencies used by other projects. <br /> <br />People have treated github as if it were a public service (hint: it never has been). They assume that it will always exist, and always be what it is. Supported by the public APIs, people build things that depend on github's presence and use github's features. One notable "thing" that people build is automated build systems, which can automatically pull dependencies from github. Then people build projects that depend on these automated build tools. <br /> <br />What happens to these projects when the automated build tools fail because they can no longer find the dependencies on github (because the dependent project has now moved elsewhere)? They will fail to build, of course. And I wonder how many projects will fail in the near future because of this. <br /> <br />We got a hint a couple of years ago, <a href= target=_blank>here</a> (which I also covered in a blog post, <a href=?viewDetailed=00158 target=_blank>here</a>). Have we learnt anything since then? I certainly hope so, although it doesn't look like it. 
<br /> <br />It's not the end of the world. Eventually the authors of the automated build tools will "update" their code libraries and will attempt to pull the dependencies from elsewhere. You will probably need a newer version of said build tools. But those github projects don't all move in one step; they move at the convenience of the project authors/maintainers. So you will probably need to constantly update your automated build tools to keep up with the new locations where the libraries can be pulled from (unless a central authority of sorts is consulted by these build tools to decide where to pull the libraries from - in that case one only needs to update said central authority). It will be an "inconvenience", but it will pass. The only question is how long this "inconvenience" will last. <br /> <br />How many will be affected? I don't know. There are so many automated build tools nowadays (it used to be only "make"). Some, which host local copies of the libraries on their own servers, won't be affected (e.g. maven). But some which pull directly from github will definitely be hit (e.g. gradle). Whatever it is, it's perhaps best to do what I said in my earlier blog post - make a local copy of any libraries which are important to you, folks! <br /> <br /><hr> <br /> <br />Github isn't the only one. On a larger scale (than just code repositories and code libraries), there are many "public service" services today which aren't really public services (they are run by for-profit entities). Many applications and tools depend on these, and they work great while they last. But people often forget that those who provide the services have other goals and/or constraints. People treat these public services as something that lasts forever, while in actuality these services can go down anytime. And every time a service goes down, it will bring down another house of cards. <br /> <br />So what to do? <br /> <br />It's common sense, really. 
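For git-hosted libraries, following that advice is cheap: a mirror clone keeps every branch and tag, and keeps working even if the upstream repository vanishes. Here is a minimal sketch; it uses a throwaway local repository as a stand-in for the GitHub upstream (in practice you would pass the real clone URL):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Stand-in for the upstream project (use the real GitHub URL in practice).
git init -q upstream
( cd upstream \
  && echo 'hello' > lib.txt \
  && git add lib.txt \
  && git -c user.name=me -c user.email=me@example.com commit -qm 'initial' )

# Take a full mirror: a bare copy of every ref (branches, tags, notes).
git clone --quiet --mirror upstream upstream-mirror.git

# Refresh it whenever you like; if upstream disappears, your copy survives.
git --git-dir=upstream-mirror.git remote update >/dev/null

git --git-dir=upstream-mirror.git for-each-ref --format='%(refname)'
```

A plain `git clone` would do in a pinch, but `--mirror` preserves all refs, which is what you want for an archival copy.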
If you really need to make your applications reliable, then you'd better make sure that whatever your application depends on is not "here today, gone tomorrow". If you depend on certain libraries, make sure you have a local copy. If you depend on certain services, make sure that those services are available for as long as you need them. If you cannot make sure of that, then you will have to run your own services to support your application, period. If you cannot run the services in house (too big/too complex/too expensive/etc), then make sure the external services you depend on are easily switchable (this means standards-based protocols/APIs with tools for easy exporting/importing). Among other things. <br /> <br />Hopefully this will avoid another gotcha when another "public service" goes down. <br /> New Fatdog64 is in the works Fatdog64'Linux It's that time of the year again. The time the bears wake up from hibernation. After being quiet for a few months, the gears have started moving in Fatdog64 development. <br /> <br />Fatdog64 721 was released over 4 months ago. It was based on LFS 7.5, which was cutting edge back in 2014 (although some of the packages are younger, as they get updated in every release). <br /> <br />As I indicated earlier (in the 720 beta release, <a href=?viewDetailed=00178 target=_blank>here</a>), the 700 series is showing its age. Compared to previous series, the 700 series is actually the longest-running Fatdog series so far, bar none. <br /> <br />But everything that has a beginning also has an end. It's time to say goodbye to the 700 series and launch a new one. <br /> <br />The new series will be based on LFS 8.2 (the most recent as of today). This gives us glibc 2.27 and gcc 7.3.0. Some packages are picked up from the SVN version of BLFS, which is newer. <br /> <br />How far have we gotten with this new release? Well, as of yesterday, we've got Xorg 1.20.0 running with twm, xterm and oclock running from its build sandbox. 
<br /> <br /><a rel=prettyPhoto href=images/xorg-twm-xterm.png ><img rel=prettyPhoto src=thumbs/xorg-twm-xterm.png /></a> <br /> <br />Hardly inspiring yet, but if you knew the challenges we faced to get there, you'd agree it's a great milestone. <br /> <br />As is usual with Fatdog64, however, it will be released when it is ready. So don't hold your breath yet. If 721 is working well for you, hang on to it (I do!). But at least you know that won't be the last time you hear of this dog. <br /> <br /> <br /><hr> <br /> <br />On a special note, I'd like to say special thanks to "step" and "Jake", the newest members of the Fatdog64 team (and thus still full of energy - unlike us old timers hehe). While I have been shamelessly away from the forum for many reasons, "step" and "SFR" continue to support Fatdog64 users in the forum. My heartfelt thanks to both of them. <br /> <br />Of course, thanks also to the wonderful Fatdog64 users who continue to support each other. Measure LED forward voltage using Arduino General Arduino is used for many things, including testing and measuring component values. <br /> <br />Somebody has made a resistance meter: <br /><a href= target=_blank></a> <br /> <br />Another has made a capacitance meter: <br /><a href= target=_blank></a> <br /> <br />Yet another has made an inductance meter: <br /><a href= target=_blank></a> <br /> <br />There is one missing: determining LED forward voltage. <br /> <br />LEDs come in a variety of colours, and these variations come from different materials and different doping densities. As a result, the forward voltages of these LEDs are also not the same - lower-energy-light LEDs (e.g. red) usually require less forward voltage than higher-energy-light LEDs (white or blue). The only sure way to know is by reading the datasheet. <br /> <br />But what if you don't have the datasheet? Or you don't know which datasheet applies to some particular LEDs (e.g. LEDs you salvage from old boards)? 
<br /> <br />The following Arduino circuit should help you. It helps you figure out the forward voltage of an LED. <br /> <br /><hr> <br /> <br /><b>Connections</b> <br /> <br /><img src=images/LED-fwd-voltage_schem.png /> <br /> <br /><a rel=prettyPhoto href=images/LED-fwd-voltage_bb.png ><img rel=prettyPhoto src=thumbs/LED-fwd-voltage_bb.png /></a> <br /> <br /><hr> <br /> <br /><b>Sketch</b> <br />Get the <a href=downloads/FindLedFwdVoltage2.ino target=_blank>sketch</a>. <br /> <br /><hr> <br /> <br /><b>Principle of operation</b> <br /> <br />Initially we have both D3 and D4 high (=5V). This charges the capacitor and turns off the LED. <br /> <br />Then we drop both D3 and D4 to low. The diode prevents the capacitor from bleeding off its charge through D3, so the only way it can discharge now is via the LED. <br /> <br />A0 measures the capacitor voltage. <br />A2 measures the series resistor voltage. <br />A2-A0 gives you the LED voltage. <br /> <br />In the ideal situation, you would expect A0 and A2 to keep dropping until conduction suddenly stops, A2 becomes zero (because no more current flows through it), and then A0 gives you the LED forward voltage. <br /> <br />Of course, in the real world this does not happen. If you test the circuit you will find that the LED keeps giving out light even when it's below its official forward voltage, and if you wait until the current is zero, the A0 voltage you get will be very much below the nominal forward voltage. <br /> <br />So how do we know when to stop measuring? Well, most LEDs are usually specified to be "conducting" when they pass at least 5mA of current. So when we detect that the current across the resistor is less than 5mA, we stop measuring and declare the A2-A0 of the last measurement as the forward voltage. <br /> <br />Oh, how do you get the LED current? The LED current is the same current that passes through its series resistor (ignoring current going out to A2). 
The current in the series resistor is simply its voltage (A2) divided by its resistance (130R). <br /> <br /><hr> <br /> <br /><b>Caveat</b> <br />The voltage-current relation of an LED is the same as any diode's - it's exponential. In other words, the forward voltage depends on the amount of current that flows (or better yet: the current that flows depends on the applied voltage). There is no single fixed "forward voltage"; the LED will actually conduct and shine (with varying brightness) at voltages lower or higher than the official forward voltage. <br /> <br />Ok, that helps. But how about forward current? <br /> <br />Typical LEDs use 20mA forward current. This is regardless of the colour or the forward voltage. So there you have it. Of course, the main exception to this rule is super-bright high-wattage LEDs which are meant for room illumination or for torches. These can easily pass 100mA, and some can even crank up to 500mA or more. Forward voltages on these kinds of LEDs can vary a lot depending on whether you're passing 5mA or 500mA. The tester above won't work properly with these kinds of LEDs. <br /> <br /><b>FAQs</b> <br />Q1: Why pins D3 and D4? Not D8 or D9? <br />A1: Because I like it that way. You can change it, but be sure to change the code too. <br /> <br />Q2: Why analog pins A0 and A2? <br />A2: Because I like it that way too. Actually, that's because an earlier design used 3 analog pins, but later on I found out that one of them (located at A1) isn't necessary; but I'd already wired the circuit with A2, so it stays there. Of course you can change it, but remember to update the code too. <br /> <br />Q3: Why do you use 130R? <br />A3: 130R is the series resistor you use for LEDs with a 2.4V forward voltage (green LEDs usually), which is somewhat in the middle of the range of LED forward voltages. Plus, they're what I have lying around. <br /> <br />Q4: Why 470uF? <br />A4: That's what I have lying around too. 
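As a sanity check, the 5mA cut-off above translates into a fixed threshold on the raw A2 reading. A back-of-the-envelope calculation (assuming the Arduino's default 10-bit ADC with a 5V reference - both are Uno defaults, adjust if your board differs):

```shell
awk 'BEGIN {
  i = 0.005                  # 5mA cut-off current
  r = 130                    # series resistor, ohms
  v = i * r                  # voltage across the resistor at the cut-off
  raw = int(v / 5.0 * 1023)  # equivalent raw reading on a 10-bit, 5V ADC
  printf "cut-off: %.2fV, raw A2 reading %d\n", v, raw
}'
# prints: cut-off: 0.65V, raw A2 reading 132
```

So once the raw A2 reading falls below roughly 133 counts, the resistor (and hence the LED) is passing less than 5mA and the sketch can stop measuring.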
You can use other values, but make sure they're not too small. <br /> <br />Q5: The diode - 1N4001 - you also use that because that's what you have lying around? <br />A5: Actually you can use any diode. In my circuit I actually used a 1N4007 because that's what I have lying around :) <br /> <br />And finally: <br />Q6: Why do you have separate D3 and D4? Since they will be brought HIGH and LOW at the same time, why not just use one pin? <br />A6: Yes, you can do it that way (remember to change the code). But using two pins makes it clearer what is happening. <br /> <br /> Spectre on Javascript? Linux'General The chaos caused by Spectre and Meltdown seems to have quietened down. Not because the danger period is over, but, well, there is other news to report. As far as I know the long tail of the fix is still on-going, and nothing short of a hardware revision can really fix them without the obligatory reduction in performance. <br /> <br />Anyway. <br /> <br />Among those who quickly released a fix were the web browser vendors. And the fix was to "reduce the granularity of performance timers" (in Javascript), because with high-precision timers, it is possible to do Spectre-like timing attacks. <br /> <br />This, I don't understand. How could one perform a Spectre or even Spectre-like timing attack using Javascript? Doesn't a Javascript program run in a VM? How would it be able to access its host's memory by linear address, let alone by physical address? I have checked wasm too - while it does have pointers, a wasm program is basically an isolated program that lives in its own virtual memory space, no? <br /> <br />In other words - the fix is probably harmless, but could one actually perform a Spectre or Spectre-like attack using browser-based Javascript in the first place? <br /> <br />That is still a great mystery to me. Maybe one day I will be enlightened. Spectre and Meltdown Linux'General Forget about the old blog posts for now. 
<br /> <br />Today the hot item is Spectre and Meltdown. It's a class of vulnerabilities caused by CPU bugs that allows an adversary to steal sensitive data, even without any software bugs. Nice. <br /> <br />Everyone and his dog is talking about it, offering their opinions and such. Thusly, I feel compelled to offer my own. <br /> <br />Mind you, I'm not a CPU engineer, so don't take this as infallible. In fact, I may be totally wrong about it. So treat it like you treat any other opinion - verify and cross-check with other sources. That being said, I've done some research on it myself, so I expect I haven't fooled myself too much :) <br /> <br /><hr> <br /><b>Overview</b> <br /> <br />There are 3 kinds of vulnerabilities: Spectre 1, Spectre 2, and Meltdown. <br /> <br />In very simplified terms, this is how they work: <br />1. <span class=itr>Spectre 1</span> - using speculative execution, leak sensitive data via cache timing. <br />2. <span class=itr>Spectre 2</span> - by poisoning the branch prediction cache, make #1 more likely to happen. <br />3. <span class=itr>Meltdown</span> - an application of Spectre 1: read kernel-mode memory from non-privileged programs. <br /> <br /><hr> <br /><b>How they work</b> <br /> <br />So how exactly do they work? <a href= target=_blank></a> gives you the full details, but in a nutshell, here it is: <br /> <br /><span class=itr>Spectre 1</span> - Speculative execution is a phantom CPU operation that supposedly does not leave any trace. And if you view it from the CPU's point of view, it really doesn't leave any trace. <br /> <br />Unfortunately, that's not the case when you view it from outside the CPU. From outside, a speculative execution looks just like normal execution - peripherals can't differentiate between them, and any side effects will stay. This is well known, and CPU designers are very careful not to perform speculative execution when dealing with the external world. 
<br /> <br />However, there is one peripheral that sits between the CPU and the external world - the RAM cache. There are multiple levels of RAM cache (L1, L2, L3); some of these belong to the CPU (as in, located in the same physical chip), some are external to the CPU. In most designs, however, the physical location doesn't matter: wherever they are, these caches aren't usually aware of the difference between speculative and normal execution. And this is where the trouble is: because the RAM cache is unable to differentiate between these two, <i>any execution</i> (normal or speculative) will leave an imprint in the RAM cache - certain data may be loaded into or removed from the cache. <br /> <br />Although one cannot read the contents of the RAM cache directly (that would be too easy!), one can still infer information by checking whether a certain set of data is inside the RAM cache or not - by timing accesses to it (if it's in the cache, data is returned fast; otherwise it's slow). <br /> <br />And that's how Spectre 1 works - by doing tricks to control speculative execution, one can perform an operation which normally isn't allowed, which leaves a RAM cache imprint, which can then be checked to gain some information. <br /> <br /><span class=itr>Spectre 2</span> - Just like the memory cache and speculative execution, branch prediction is a performance-improvement technique used by CPU designers. Most branches will trigger speculative execution; branch prediction (when the prediction is correct) makes that speculation as short as possible. <br /> <br />In addition, certain memory-based branches ("indirect branches") use a small, in-CPU cache to hold the locations of the previous few jumps; these are the locations from which speculative execution will be started. <br /> <br />Now, if you can fill this branch prediction cache with bad values (="poisoning" it), you can make the CPU perform speculative execution at the wrong location. 
Also, by making the branch prediction err most of the time, you make that speculative execution longer-lived than it should be. Together, they make it much easier to launch a Spectre 1 attack. <br /> <br /><span class=itr>Meltdown</span> - is an application of Spectre 1 to attempt to read data from privileged and protected kernel memory, by a non-privileged program. Normally this kind of operation will not even be attempted by the CPU, but when running speculative execution, some CPUs "forget" to check for privilege separation and just blindly do what they are asked to do. <br /> <br /><hr> <br /><b>Impact</b> <br /> <br />Anything that allows non-privileged programs to read and leak information from protected memory is bad. <br /> <br /><hr> <br /><b>Mitigation Ideas</b> <br /> <br />Addressing these vulnerabilities - especially Spectre - is hard because the cause of the problem is not a single architecture or CPU bug or anything like that - it is tied to the concept itself. <br /> <br />Speculative execution, memory cache, and branch prediction are all related. They are time-proven performance-enhancing techniques that have been employed for decades (in the consumer microprocessor world, Intel was first with their Pentium CPU back in 1993 - that's 25 years ago as of this writing). <br /> <br /><span class=itr>Spectre 1</span> can be stopped entirely if speculative execution does not impact the cache (or if the actions on the cache can be undone once speculative execution is completed). But that is a very expensive operation in terms of performance. By doing that, you more or less lose the speed gain you get from speculative execution - which means you may as well not bother with speculative execution in the first place. <br /> <br /><span class=itr>Spectre 2</span> can be stopped entirely if you can enlarge the branch prediction cache so that poisoning won't work. 
But there is a physical limit on how large the branch cache can be before it slows down and loses its purpose as a cache. <br /> <br />Alternatively, it can be stopped again in its entirety if you disable speculative execution during branching. But that's what branch prediction is for, so if you do that, you may as well drop the branch prediction too. <br /> <br /><span class=itr>Meltdown</span>, however, is easier to work out. We just need to ensure that speculative execution honours the memory protection too, just like normal execution. Alternatively, we make the kernel memory totally inaccessible from non-privileged programs (not by access control, but by mapping it out altogether). <br /> <br /><hr> <br /><b>Mitigation In Practice</b> <br /> <br /><span class=itr>Spectre 1</span> - There is no fix available yet (no wonder, this is the most difficult one). <br /> <br />There are clues that some special memory barrier instructions (e.g. LFENCE) can be modified (perhaps by microcode update?) to stop speculative execution, or at least remove the RAM cache imprint by undoing cache loading during speculative execution, on demand (that is, when that LFENCE instruction is executed). <br /> <br />However, even when it is implemented (it isn't yet at the moment), this is a piecemeal fix at best. It requires patches to be applied to compilers, or more importantly to any programs capable of generating code or running interpreted code from untrusted sources. It does not stop the attack fully; it only makes it more difficult to carry out. <br /> <br /><span class=itr>Spectre 2</span> - Things are a bit rosier in this department. The fix is basically to disable speculative execution during branching. This can be done in two ways. In software, it can be done by using a technique called "retpoline" (you can google that) - which basically lets speculative execution chase its own tail (=thus effectively disabling it). 
In hardware, this can be done by the CPU exposing controls (via microcode update) to temporarily disable speculative execution during branching, and then the software making use of those controls. <br /> <br />Retpoline is available today. The microcode update is <i>presumably</i> available today for certain CPUs, and the Linux kernel patches that make use of those branch controls are also available today. However, none of them have been merged into mainline yet. (Certain vendor-specific kernel builds already have these fixes, though.) <br /> <br />Remember, the point of Spectre 2 is to make it easier to carry out Spectre 1, so fixing Spectre 2 makes Spectre 1 less likely to happen, to the point of making it irrelevant (hopefully). <br /> <br /><span class=itr>Meltdown</span> - This is where the good news finally is. The fix can be done, again, via CPU microcode update, or by software. Because it may take a while for that microcode update to happen (or it may not happen at all), the kernel developers have come up with a software fix called KPTI - Kernel Page Table Isolation. With this fix, kernel memory is completely hidden from non-privileged programs (that's what the "isolation" stands for). This works, but at a very high cost in performance: it is reported to be 5% at minimum, and may go to 30% or more. <br /> <br /><hr> <br /> <br /><b>Affected CPUs</b> <br /> <br />Everyone has a different view on this, but here is my take. <br /> <br /><span class=itr>Spectre 1</span> - All out-of-order superscalar CPUs (no matter what architecture or vendor or make) from the Pentium Pro era (ca 1995) onwards are susceptible. <br /> <br /><span class=itr>Spectre 2</span> - All CPUs with branch prediction that uses a cache (aka "dynamic branch prediction") are affected. The exact techniques to carry out a Spectre 2 attack may differ from one architecture to another, but the attack concept is applicable to all CPUs of this class. 
<br /> <br /><span class=itr>Meltdown</span> - certain CPUs get it right and honour memory protection even during speculative execution. These CPUs don't need the above KPTI patches and they are not affected by Meltdown. Some say that CPUs from AMD are not affected by this; but with so many models involved it's difficult to be sure. <br /> <br /><hr> <br /> <br />So that's it. It does not sound very uplifting, but at least you get a picture of what you're going to have for the rest of 2018. And the year has just started ... <br /> <br />EDIT: If you don't understand some of the terms used in this article, you may want to check <a href= target=_blank>this excellent article</a> by Eben Upton. Old blog posts General Long before time began, I had a blog. It was on a shared blogospace. I had long forgotten about it, but a few days ago I remembered it and visited the site. To my surprise, it still exists; my old posts are still there. As if time stood still. <br /> <br />I tried to log in to that site, but Google wouldn't let me. I used a Yahoo email for the login id, and I haven't accessed that email account for ages. When I tried to do so, it wouldn't recognise my password. In the light of Yahoo's massive data breach a couple of years ago, this isn't surprising. I tried to recover the account using my other emails, but that didn't work either. Well, that's too bad, but I wouldn't have expected an abandoned blog to exist at all. <br /> <br />What I am going to do, instead, is scrape the text off that blog and re-post some of the more interesting ones here. There are some unfinished posts there too; for those whose subject I still remember, I will publish the complete versions here too. <br /> <br /> Fatdog64 721 is Released Fatdog64'Linux In the light of the recent Spectre and Meltdown fiasco, the Linux kernel team has released patches to sort of work around the problem. 
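As an aside: on kernels that carry these patches, you can ask the kernel itself what it thinks of your CPU. This sysfs interface appeared alongside the fixes (around the 4.14/4.15 timeframe), so its presence is version-dependent:

```shell
# Patched kernels report per-vulnerability status here; the directory
# simply doesn't exist on older, unpatched kernels.
if [ -d /sys/devices/system/cpu/vulnerabilities ]; then
    grep . /sys/devices/system/cpu/vulnerabilities/* 2>/dev/null
else
    echo "no vulnerability reporting - kernel predates the patches"
fi
```

On patched kernels this lists entries such as meltdown, spectre_v1 and spectre_v2, each marked along the lines of "Mitigation: PTI", "Vulnerable", or "Not affected".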
<br /> <br />It's not free - you will get a performance hit anywhere from 5% to 30% depending on the kind of apps that you use (more if you use virtual machines) - but at least you're protected. <br /> <br />We have released Fatdog64 721 with an updated kernel (4.14.12) that comes with this workaround. <br /> <br />You can, however, decide to risk it and not use the workaround, by passing the "<b>pti=off</b>" boot parameter. You'd better know what you're doing if you do that, though. <br /> <br />Apart from that, this release also supports microcode updates and hibernation. We've bundled the latest microcode from both Intel (dated 8 Jan 2018) and AMD (latest from linux-firmware as of 10 Jan 2018); however it is unclear whether either of them addresses the problem. <br /> <br /><hr> <br /><a href= target=_blank>Release Notes</a> <br /><a href= target=_blank>Announcement (same announcement as 720).</a> <br /> <br />Get it from the usual locations: <br /><a href= target=_blank>Primary site - (US)</a> <br /><a href= target=_blank> - European mirror</a> <br /><a href= target=_blank> - Australian mirror</a> <br /><a href= target=_blank> - European mirror</a> <br /> <br /> How to destroy FOSS from within - Part 4 Linux'General This is the fourth installment of the article. <br />In case you missed it, these are <a href=?viewDetailed=00168 target=_blank>part one</a>, <a href=?viewDetailed=00172 target=_blank>part two</a> and <a href=?viewDetailed=00177 target=_blank>part three</a>. <br /> <br />I originally planned to finish this series of articles at the end of last year, so we could start 2018 on a more uplifting note - but I didn't have enough time, so here we are. Anyway, we have already started 2018 with the <a href= target=_blank>biggest security compromise ever</a> (CPU-level memory protection can be broken even without any kernel bugs; the kernel memory of any OS in the last 20 years can be read by userspace programs) - one more piece of bad news cannot make it worse. 
<br /> <br />And now, for the conclusion. <br /> <br /><hr> <br /> <br />By now you should already see how easy it is to destroy FOSS if you have money to burn. <br /> <br />From <a href=?viewDetailed=00172 target=_blank>Part 2</a>, we've got the conclusion that <span class=itg>"a larger project has more chance of being co-opted by someone who can throw money to get people to contribute"</span>. This is the way to co-opt the project from the bottom up - by paying people to actively contribute and slowly redirect the project in the direction the sponsor wants. <br /> <br />From <a href=?viewDetailed=00177 target=_blank>Part 3</a>, we've got the conclusion that <span class=itg>"the direction of the project is set by the committers, who are often selected either at the behest of the sponsor, or by virtue of being active contributors"</span>. This is the way to co-opt the project from the top down - you plant people who will slowly rise to the rank of committer. Or you can just become a "premium contributor" by donating money and stuff and instantly get the right to appoint a committer; and when you have them in charge, simply reject contributions that are not part of your plan. Or, if you don't care about being subtle, simply <span class=emr>"buy off"</span> the current committers (= employ them). <br /> <br />In both cases, people can revolt by forking, but if they don't have the numbers, the fork will be futile because: <br />a) it will be short-lived <br />b) it will be stagnant <br />and in either case, people will continue to use the original project. <br /> <br />It's probably not the scenario you'd like to hear, but that's how things unfold in reality. <br /> <br /><hr> <br /> <br />In case you think that this is all bollocks, just look around you. <br /> <br />Look around the most important and influential projects. <br /> <br />Look at their most active contributors. <br /> <br />Ask yourself why they are contributing, and who employs them. 
<br /> <br />Then look at the direction these people have taken. Look very, very closely. <br /> <br />Already, a certain influential SCM system used to manage a certain popular OS is now more comfortable to run on a foreign OS than on the OS it was originally developed on (and is used to manage). <br /> <br />Ask yourself how this can be. "Oh, it's because we have millions of downloads for that foreign OS, so that foreign OS is now considered a top-tier platform and we have to support that platform" (to the extent that we treat the original OS platform as 2nd tier and avoid using native features which cannot be used on that foreign OS, because, well, millions of downloads). Guess what? The person who says that works for the company that makes that foreign OS. And not only that, he's got the influence, because, well, there are a lot of "contributors" coming from where he works. <br /> <br />What's next? bash cannot use "fork()" because a foreign OS does not support fork()? <br /> <br />Who pays the people who work on systemd? Who pays people to work on GNOME? Who pays people to work on KDE? Who pays the people who work on Debian? Who are the members of the Linux Foundation? You think these people work out of the kindness of their hearts for the betterment of humanity? Some of them certainly do. Some, however, work for the betterment of themselves - FOSS be damned. <br /> Fatdog64 720 Final is released Fatdog64'Linux Fatdog64 720 Final was released on 20 December 2017, after about three weeks of beta (720 beta was announced <a href=?viewDetailed=00178 target=_blank>here</a>). <br /> <br />It was hectic before Christmas so I didn't get to announce it here in my blog. 
In fact, Barry Kauler (original author of Puppy Linux) <a href= target=_blank>announced it earlier</a> than I did <img src=images/smilies/teeth.gif /> - which is quite a tribute for us <img src=images/smilies/happy.gif /> <br /> <br />There aren't many changes between this and the beta, other than a few bug fixes - as I said earlier, 720 beta was actually quite stable. <br /> <br />One new "feature" made it in: 720 now comes with two initrds (dual-initrds) - the first one is the usual huge initrd, and the second one is a very small initrd (around 3.5MB) with the ability to "load" the larger initrd. This was a suggestion from forum member LateAdopter, which we "adopted" <img src=images/smilies/teeth.gif /> <br /> <br />Why the need for that? Some people have been complaining about the slow booting speed of Fatdog64 due to its huge initrd. There are many reasons for this slowness but it's mainly because of: <br />a) old BIOSes <br />b) old bootloaders (grub4dos/grub-legacy) <br />c) booting from a modern, large filesystem such as ext4 with a size over 16GB. <br /> <br />This particular combination is especially toxic - bootloaders usually use BIOS calls to get data from the disk, and old bootloaders don't understand new filesystems well, so while they can load from them, they do so very, very slowly. <br /> <br />Nevertheless, the new "nano-initrd" (as I call it) comes to the rescue. The small initrd will be loaded fast enough by the bootloader, and then the Linux kernel takes over and loads the huge initrd - using modern, optimised code. So booting remains fast. <br /> <br />However, nothing comes for free. It's basically a stripped-down initrd (as explained <a href=/wiki/wiki.cgi/MinimalFatdogBoot target=_blank>here</a>) so along with the cut-down in size, a lot of other stuff had to be sacrificed too. Don't expect the nano-initrd to be able to boot from exotic environments. 
<br /> <br /><hr> <br /><a href= target=_blank>Release Notes</a> <br /><a href= target=_blank>Forum announcement</a> <br /> <br />Get it from the usual locations: <br /><a href= target=_blank>Primary site - (US)</a> <br /><a href= target=_blank> - European mirror</a> <br /><a href= target=_blank> - Australian mirror</a> <br /><a href= target=_blank> - European mirror</a> <br /> How to create Nvidia driver SFS for Fatdog and Puppy Fatdog64'PuppyLinux'Linux If you need to use the Nvidia driver (instead of the open-source nouveau driver), I've written up the steps to prepare the driver SFS yourself. <br /> <br />I wrote this article because the Nvidia driver is sensitive to kernel changes; each kernel change requires a rebuild of the driver. And we usually don't provide the Nvidia driver for beta releases. <br /> <br />Also, there are variations of the Nvidia driver (long term, short term, legacy, etc) supporting different cards. Creating a driver for each variation, and re-creating them every time the kernel changes, takes a lot of time. <br /> <br />So I've published the way for you to do it yourself. The steps enable you to create the SFS yourself, or, if you can't be bothered about the SFS, they will install the driver directly for you. <br /> <br />As a bonus, it should work on recent Puppy Linux too. <br /> <br />The instructions are <a href=/wiki/wiki.cgi/CreateNvidiaDriverSFS target=_blank>here</a>. <br /> <br />Note: this article is an update of the original instructions I wrote <a href= target=_blank>here</a> (which are XenialPup64-specific). I accidentally removed glibc Fatdog64'Linux I accidentally removed glibc. <br /> <br />I was running the Fatdog build process and I wanted to remove glibc from its chroot.
<br /> <br />The correct command to do that was this: <pre class=code><code class=highlight>ROOT=chroot removepkg glibc32 glibc</code></pre> <br />but I typed it the wrong way: <pre class=code><code class=highlight>removepkg ROOT=chroot glibc32 glibc</code></pre> <br /> <br />This has the unintended effect of attempting to remove a package named <span class=itb>ROOT=chroot</span> <br />(which didn't exist), and then <span class=itr>glibc32 and glibc</span>. Of course the removal wasn't fully successful, but the dynamic linker <span class=emb>/lib64/ </span>was deleted and that's enough to stop almost anything. <br /> <br />In a normal distro this would probably require an immediate re-install. <br /> <br />In a Puppy-like distro (including Fatdog) all you need to do is boot pristine, disregarding any savefile/savefolder (<span class=itb>pfix=ram</span> for Puppies and <span class=itg>savefile=none</span> for Fatdog); and then clean up the mess created by the accidental deletion. This is usually done by deleting the whiteouts, so glibc can "show up" again in the layered filesystem. <br /> <br />But I was in the middle of something and I really didn't want to reboot and abandon what I was doing. What to do? I still had a few terminals open; was there anything I could do to salvage the situation? <br /> <br />Fortunately, Fatdog has a failover mechanism for situations like this. <br /> <br />Fatdog has a static busybox located in <span class=itg>/aufs/pup_init/bin/busybox</span>. This busybox is linked with a complete set of applets, with its shell (ash) compiled to prefer internal busybox applets over external commands. <br /> <br />By running its shell <pre class=code><code class=highlight>/aufs/pup_init/bin/busybox ash</code></pre> <br />I was back in a working shell, and I could do "ls" and other things as needed, because the busybox is fully static and doesn't need glibc.
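Incidentally, the difference between the correct and the mistyped command above is plain shell semantics: a leading <span class=itb>VAR=value</span> token is exported into the command's environment, while the same token placed after the command name is merely a positional argument. A minimal sketch of the difference, using a throwaway inline script in place of removepkg (the "demo" name is just a placeholder for $0):

```shell
# Prefix form: ROOT is exported into the child's environment,
# and the package names are the only arguments.
ROOT=chroot sh -c 'echo "ROOT=${ROOT:-unset} args=$*"' demo glibc32 glibc
# prints: ROOT=chroot args=glibc32 glibc

# Argument form: ROOT=chroot is merely the first positional argument;
# the ROOT variable itself is never set ('env -u' clears any inherited value).
env -u ROOT sh -c 'echo "ROOT=${ROOT:-unset} args=$*"' demo ROOT=chroot glibc32 glibc
# prints: ROOT=unset args=ROOT=chroot glibc32 glibc
```

This is exactly why removepkg saw "ROOT=chroot" as a package name instead of an instruction to operate inside the chroot.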
<br /> <br />Inside there, I then ran Fatdog's whiteout clean-up script <pre class=code><code class=highlight>sh</code></pre> <br />which ran nicely because busybox has enough applets to support it. This removed the whiteouts, in effect undoing the deletion. <br /> <br />But trying to do "ls" on another terminal still indicated that glibc wasn't installed yet. This is because aufs, the layered filesystem, isn't aware that we have "updated" its layer behind its back. All we need to do is tell it to re-evaluate its layers. <br /> <br />This can be done by running (from the terminal that runs the static busybox shell) this command <pre class=code><code class=highlight>mount -i -t aufs -o remount,udba=reval aufs /</code></pre> <br />Once this is done, the system is back to life, and the project is saved. <br /> Fatdog64 720 Beta is Released Fatdog64'Linux The next release of Fatdog64 is finally here! <br /> <br />Well, the beta version at least. I actually think this is the next stable release. We have been running it for weeks ourselves, but because we have made so many changes, it's good to treat it as a beta and test it with a wider audience. <br /> <br />A lot of improvements since the last release; lots of package updates, and lots of fixes too. However this is still based on 710 as the base. <br /> <br />We plan to follow this one up with a Final soon, hopefully before Christmas. <br /> <br />What's next? <br /> <br />Once it goes final, it will probably be sunset for the 700 series. While 720 is running very well, it is showing its age. Some binary packages refuse to run on it, demanding a newer glibc, for example. <br /> <br />The decision isn't final yet, and the 800 series probably isn't going to be started very soon (we all need to catch our breaths). Meanwhile, enjoy it while you can.
<br /> <br /><a href= target=_blank>Release Notes</a> <br /><a href= target=_blank>Forum announcement</a> <br /> <br />Get it from the usual locations: <br /><a href= target=_blank>Primary site - (US)</a> <br /><a href= target=_blank> - European mirror</a> <br /><a href= target=_blank> - Australian mirror</a> <br /><a href= target=_blank> - European mirror</a> <br /> How to destroy FOSS from within - Part 3 Linux'General This is the third installment of the article. <br />In case you missed it, these are <a href=?viewDetailed=00168 target=_blank>part one</a> and <a href=?viewDetailed=00172 target=_blank>part two</a>. <br /> <br /><hr> <br /> <br />In the previous post, I stated that the direction of an FOSS project is set by two groups of people: <br />a) People who work on the project, and <br />b) People who are allowed to work on the project. <br /> <br />We have examined (a) in <a href=?viewDetailed=00172 target=_blank>part two</a>, so now let's examine (b). <br /> <br />Who are the people allowed to work on the project? Isn't everyone allowed? The answer is a solid, resounding "NO". <br /> <br />Most FOSS projects, if they have more than one contributor, tend to use some sort of source code management (SCM) system. In a typical SCM system, there are two classes of users: users with commit rights, who can modify the code in the SCM (committers), and read-only users, who can read and check out code from the SCM but cannot change anything. <br /> <br />In most FOSS projects, the number of committers is much smaller than the number of read-only users (potentially, anyone in the world with enough skill is a read-only user if the SCM system is open to the world, e.g. if you put the code in a public SCM repository such as github). <br /> <br />The committers don't necessarily write code themselves.
Some of them do; some just act as "gatekeepers": they receive contributions from others; vet and review the changes; and "commit" them (=update the code in the SCM) when they think the contribution has met certain standards. <br /> <br />Why does it matter? Because eventually these committers are the ones who decide the direction of the project, by virtue of deciding what kind of changes are accepted. <br /> <br />For example, I may be the smartest person in the world, I may be the most prolific programmer or artist in the world; if the committers of the project I want to contribute to don't accept my changes (for whatever reason), then for all practical purposes I may as well not exist. <br /> <br /><hr> <br /> <br />Hang on, you say, FOSS doesn't work this way. I can always download the source (or clone the repo) and work on it on my own! No approval from anybody is required! <br /> <br />Yes, you can always do that, but that's you doing the work privately. That's not what I mean. As far as the project is concerned, as far as the people who use that project are concerned, if your patches aren't committed back to the project mainline, then you're not making any changes to the project. <br /> <br />But hey, wait a minute here, you say. That's not the FOSS I know. The FOSS I know works like this: if they don't let me commit these large gobs of code that I've written, what's stopping me from just publishing my private work and making it public for all to see and use? In fact, some FOSS licenses even require me to do just that! <br /> <br />Oh, I see. You're just about to cast the most powerful mantra of all: "", is it? <br /> <br /><hr> <br /> <br />I regret to inform you that you have been misinformed. While the mantra is indeed powerful, it unfortunately does not always work. <br /> <br />Allow me to explain. <br /> <br />A fork usually happens when people disagree with the committers on the direction they take.
<br /> <br />Disagreement happens all the time; it's only when it is not reconcilable that a fork happens. <br /> <br />But the important question is: what does the forking accomplish in the end? <br /> <br />Personally, I consider a fork to be successful if it meets one of two criteria: <br /> <br />a) the fork flourishes and develops into a separate project, offering an alternative to the original project. <br /> <br />b) the fork flourishes and the original project dies, proving that the people behind the original project lost their sight and bet on the wrong direction. <br /> <br />In either case, for these to happen, we must have enough skilled people to back the fork. The larger the project, the more complex the project, the more skilled people must revolt and stand behind the fork. It's a game of numbers; if you don't have the numbers you lose. Even if you're super smart, you only have 24 hours a day, so chances are you can never single-handedly fork a large-scale project. <br /> <br />In other words, the "" mantra does not always work in the real world; in fact, it mostly doesn't. <br /> <br />Let's examine a few popular forks and see how well they do. <br /> <br />1. LibreOffice (fork of OpenOffice). This is a successful fork, because most of the original developers of OpenOffice switched sides to LibreOffice. The original project is dying. <br /> <br />2. eglibc (fork of glibc). Same story as above. Eventually, the original "glibc" folded, and the eglibc fork was officially accepted as the new "glibc" by those who own the "glibc" name. <br /> <br />3. DragonflyBSD (fork of FreeBSD). Both the fork and the original survive; and they grow separately to offer different solutions for the same problem. <br /> <br />4. Devuan (fork of Debian). The fork has existed for about two years now; the jury is still out on whether it will be successful. <br /> <br />5. libav (fork of ffmpeg). The fork fails; only Debian supported it and it is now dying. <br /> <br />6.
cdrkit (fork of cdrtools). The fork fails; the fork stagnates while the original continues. <br /> <br />7. OEM Linux kernels (forks of the Linux kernel). There are a ton of these forks; each ARM CPU maker and ARM board maker effectively has one of them. Most of them failed; the forks didn't advance beyond the original patching to support the OEM. That's why so many Android devices are stuck on 3.x kernels. Only one or two are successful, and those that are, are merging their changes back into the mainline - and will eventually vanish once the integration is done. <br /> <br />8. KDE Trinity (fork of KDE). It's not a real fork per se, but more of a continued maintenance of KDE 3.x. It fails; the project is dying. <br /> <br />9. MATE desktop (fork of GNOME). Same as Trinity, MATE is not a real fork per se, but a continued maintenance of GNOME 2.x. I'm not sure of the future of this fork. <br /> <br />10. Eudev (fork of systemd-udev). The fork survives, but I'd like to note that the fork is mostly about separating "udev" from "systemd" and is not about going in a separate direction and implementing new features etc. Its long-term survivability is questionable too because only 2 people maintain it. Plus, it is only used by a few distributions (Gentoo is the primary user, but there are others too - e.g. Fatdog). <br /> <br />11. GraphicsMagick (fork of ImageMagick). The fork survives as an independent project but I would say it fails to achieve its purpose: it doesn't have much impact - most people only know about ImageMagick and prefer to use it instead. <br /> <br />I think that's enough examples to illustrate that in most cases, your fork will **probably** survive only if you have the numbers. If you don't, then the fork will either die off, or will have no practical impact as people continue to use the original project. <br /> <br />In conclusion: The mantra of "" is not as potent as you thought it would be.
<br /> <br />As such, the direction of a project is mostly set by the committers. Different projects have different policies on how committers are chosen; but in many projects the committers are chosen based on: <br />a) the request of the project's (financial) sponsor, and/or <br />b) meritocracy (read: do-ocracy) - how much contribution he/she has made before. <br /> <br />But remember what I said about <a href=?viewDetailed=00172 target=_blank>do-ocracy</a>? Fatdog Update Fatdog64'Linux Well, I'm still here. I've been busy with life, moving houses, making arrangements, etc. Too many things to do, too little time. I won't bore you with all those mundane things, since what you're most probably here for is Fatdog. <br /> <br />Anyway. <br /> <br />Fortunately for all of us Fatdog64 lovers, it has not been so quiet for Fatdog64 under the hood. Our two new members, "SFR" and "step", have been busy at work - bug fixes, package updates, package rollbacks when the updates don't work :), package replacements, etc. You will find them in the Forum as well, helping other people. <br /> <br />I would say that recruiting them was the best decision we have made - the dynamic works well between us, so discussion is always productive. <br /> <br />In fact, we're nearing a release now. To be accurate, however, we have been "near a release" for a few months now - there are so many changes we'd like to share with you; but there is always "one more thing we would like to do before release to make it better" - and then it's back to the kitchen <img src=images/smilies/teeth.gif />. So this release may happen soon or may be a bit later (or a lot later) - cross your fingers! <br /> <br />But seriously, all in all, things are looking good on the Fatdog64 side. The team has done lots of exciting improvements. As usual, it may not be perfect, but there is always the next release <img src=images/smilies/teeth.gif />. <br /> <br />Things have not been going so well on the ARM front.
I'm really the only one who works on FatdogArm, and my lack of time to do anything with it means it gets left behind; and it shows. No new platforms supported, packages not updated ... although, all in all, it still runs pretty well, for an aged OS. <br /> <br />Well, that's about it for now. As for my other FOSS article, I have published two parts. It's actually a four-parter, so there are two more parts to publish ... I'll get that done very soon. <br /> <br />Cheerios everyone. Fatdog64 build recipes Fatdog64'Linux I've just uploaded the build recipes for all the official packages of Fatdog64. They are available <a href= target=_blank>here</a>. <br /> <br />They are tarballs, containing the recipe proper and other supporting files such as patches, desktop files, icons, etc. <br /> <br />They have previously been available inside the binary packages (every official Fatdog binary package contains the build recipe tarball); but to make it easier for people to search and re-use, we have decided to extract them and upload them in a separate place. <br /> <br />The recipe itself is just a shell script, to be used with Fatdog's pkgbuild system. If you want to use a recipe to build as-is, you need that build system, which you can get from <a href= target=_blank>here</a>. Warning: only tested to work in Fatdog. However, if you just want to examine how the build is done, you can just look at the recipe - it's simple enough to understand. <br /> <br />Note: If you're already on Fatdog64, don't bother getting that. pkgbuild is already included as part of Fatdog's devx. <br /> <br />These build recipes will be updated from time to time; but I can't guarantee the "freshness" of any of these recipes. And oh, they come totally unsupported - feel free to use them as you see fit, but the risk is all yours. And while I'd be glad to hear suggestions and/or patches for them, please don't come to me for support. My hands are already full with other things.
Real-time Kernel for Fatdog64 710 Fatdog64'Linux I built and uploaded a real-time kernel for Fatdog64. <br /> <br />It's based on Linux 4.4.52 - the latest as of today; from the same branch as the 710 kernel (4.4.35); one of the LTS (long-term-support) versions; patched with the 4.4.50-rt63 patches. <br /> <br />I could manage only the "Basic RT" (PREEMPT_RTB) configuration. This is somewhere between the "low-latency" and "fully preemptible" configurations. I tried the "fully preemptible" (PREEMPT_FULL) configuration, but while it gave me a kernel binary, it didn't work satisfactorily --- too many lockups at too unpredictable times. <br /> <br />It has been a very long time since I built an RT kernel (the last one was probably around the Linux 3.4 days) which could run in a fully preemptible manner. The RT patches aren't always stable either; depending on the kernel version they can be good, okay, or just bad; so I suppose for today, this is the best I can get. <br /> <br />Apart from changing the pre-emption level to PREEMPT_RTB, I made two more (unrelated) changes: <br />- I increased the timer frequency to 1000 Hz. <br />- I added HDA_HWDEP support. <br /> <br />The first change was made because I plan to use the RT kernel for some audio work that requires lower latency and higher timer resolution. <br /> <br />The second was made because I hoped that by tweaking the codec's amplifier with <a href= target=_blank>HDA Analyzer</a> (which requires HDA_HWDEP support) I could make my laptop speaker louder; but it turned out to be wishful thinking. <br /> <br />Anyway, enjoy. If you need a guide on how to use the new kernel, look <a href= target=_blank>here</a>. There is a new way to test kernels without having to do all of the above, but it hasn't been written up yet. I'll write it when I have time (and motivation) - basically you use the "extrasfs" boot parameter to load the kernel-modules.sfs instead of replacing the kernel modules inside your initrd.
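As an aside, if you want to confirm which preemption model and timer frequency a running kernel was actually built with, one quick check is the in-kernel config. This sketch assumes the kernel was built with CONFIG_IKCONFIG_PROC (not guaranteed for any given Fatdog kernel); otherwise it falls back to the kernel's build banner:

```shell
# Show the preemption and timer-frequency options of the running kernel.
# /proc/config.gz only exists when the kernel enables CONFIG_IKCONFIG_PROC;
# otherwise fall back to the build banner from uname, which usually
# contains "PREEMPT" (or "PREEMPT RT") for preemptible kernels.
if [ -r /proc/config.gz ]; then
  zcat /proc/config.gz | grep -E '^CONFIG_(PREEMPT|HZ)'
else
  uname -v
fi
```

On an RT kernel like the one above you would expect to see PREEMPT_RTB (or the equivalent) set and CONFIG_HZ=1000.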
Fatdog64 is now listed in Distrowatch Fatdog64'Linux I was notified of this a while ago, but because of my other stuff I forgot to announce it here. <br /> <br /><a href= target=_blank>Distrowatch</a> is basically a site that monitors various Linux distributions and their updates; as well as news about what's new, what's coming up, and other interesting stuff about Linux distributions. If you haven't been there already, you should check it out. <br /> <br />Fatdog64 was submitted to Distrowatch quite a while ago, languishing in the "submission queue" for years. Apparently this year is the year - we are finally listed there: <a href= target=_blank></a>. <br /> <br />Yay! <br /> <br /> How to destroy FOSS from within - Part 2 Linux'General This is the second installment of the article. In case you missed it, part one is <a href=/blog/?viewDetailed=00168 target=_blank>here</a>. <br /> <br /><hr> <br /> <br />In the past, companies tried to destroy FOSS by discrediting it. This was usually done by hiring an army of paid shills - people who spread hoaxes, misinformation, and self-promotion where FOSS people usually hang around (in forums, blog comments, etc). This becomes too obvious after a short while, so the (slightly) newer strategy is to employ "unhelpful users" who hang around the same forums and blog comments, pretending to help, but all they do is shoot down every question by embarrassing the inquirer (giving <i>"oh noobs questions, RTFM!"</i>, or <i>"why would you want to **<u>do that</u>**???"</i> type responses, all the time). <br /> <br />Needless to say, all this doesn't always work (usually it doesn't) as long as the project is still active and its community isn't really filled with assholes. <br /> <br />In order to know how to destroy FOSS, we need to know how FOSS survives in the first place. If we can find the lifeline of FOSS, we can choke it, and FOSS will inevitably die a horrible death.
<br /> <br />The main strength of FOSS is its principle of do-ocracy. Things will get done when somebody's got the itch to do it; and that somebody will, by virtue of do-ocracy, set the direction of the project. <br /> <br />The main weakness of FOSS is its principle of do-ocracy. Things will get done when somebody's got the itch to do it; and that somebody will, by virtue of do-ocracy, set the direction of the project. <br /> <br />The repeated sentence above is not a mistake; it's not a typo. Do-ocracy is indeed both the strength and the Achilles' heel of FOSS. Let's see why this is the case. <br /> <br />Direction in an FOSS project is set by two groups of people: <br />a) People who work on the project, and <br />b) People who are allowed to work on the project. <br /> <br /><b><u>Let's examine (a).</u></b> <br /> <br />Who are the people who work on the project? They are: <br />1) People who are capable of contributing <br />2) People who are motivated to contribute <br /> <br /><hr> <br /> <br /><u>Let's examine (1).</u> <br />Who are the people capable of contributing? Isn't everyone equally capable? The answer, though it may not be obvious due to the popular "all people are equal" movement, is a big, unqualified NO. People who are capable of contributing are people who have the skill to do so. Contribution in the documentation area requires skilled writers; contribution in artwork requires skillful artists; contribution in code requires masterful programmers. If you have no skill, you can't contribute - however motivated you are. <br /> <br />The larger a project grows, the more complex it becomes. The more complex it becomes, the more experience and skill are needed before somebody can contribute and improve the project. To gain more skill, somebody needs to invest the time and effort, and get themselves familiar with the project and/or the relevant technology. A bigger "investment" means fewer people can "afford" it. <br /> <br />And this creates a paradox.
The more successful a project becomes, the larger it becomes. The larger it becomes, the more complex it becomes. The more complex it becomes, the smaller the available talent pool. <br /> <br /><hr> <br /> <br /><u>Let's examine (2).</u> <br />People contribute to FOSS projects for many reasons, some less noble than others. Examples: <br />- School projects (including GSoC). <br />- Some do it to "pay back" ("I used FOSS software in the past, now I'm paying it back by contributing"). <br />- Some do it for fame and to show off their skills. <br />- Some do it just to kill time. <br />- Some do it to enhance their resume (oh wow - look at the number of projects in my github account !!! (although most of them are forks of others ...)). <br />- Some do it because they are the only ones who need the feature they want, so they just get it done. <br />- Etc; the reasons are too numerous to list. But there is one **BIG** motivation I haven't listed above, and I'm going to write it down in a separate sentence, because it is worthy of your attention. <br /> <br />👉👉👉 Some do it because it is <span style=emr>their day job</span>; they are being paid to do so 👈👈👈 <br /> <br /><hr> <br /> <br /><u><b>What can we conclude from (1) and (2)?</b></u> <br />A larger, more complex project requires people with more skills. <br />More skills require more investment. <br />More investment requires more motivation. <br />Motivation can be bought (=jobs). <br /> <br />Thus it leads to the inevitable conclusion: the more complex a project becomes, the greater the chance that the people who work on it are paid employees. And paid employees follow the direction of their employer. <br /> <br />In other words: a larger project has a greater chance of being co-opted by someone who can throw money around to get people to contribute. <br /> <br />We will examine (b) in the next installment. Time flies Fatdog64'Linux Wow, it is now the third month of 2017.
I haven't written anything for 3 months! <br /> <br />Well, things do get quiet during the holiday season; and as usual there are real-life issues that I need to take care of. <br /> <br />In between, things have happened. Fatdog64 is now featured on Distrowatch: <a href= target=_blank></a>, yay! <br /> <br />Also, we recruited a new member, "step", from the Puppy Linux forum. Before joining, step was known as the maintainer of a few programs used in Puppy Linux, such as gtkmenuplus, findnrun, and others. Welcome step! <br /> <br />Though this blog is quiet, Fatdog development is not. It continues nicely in the background at a comfortable pace: bug fixes, minor feature updates, etc. Bug fixes aren't always visible, but package updates are visible <a href= target=_blank>here</a>. Also check out the <a href= target=_blank>Fatdog contributed packages thread</a>. <br /> <br />In other news, LFS 8.0 has been released, and while it is tempting to conclude that Fatdog 800 will follow suit soon, it won't happen. <br /> <br />While 710 (which is based on LFS 7.5/CLFS 3.0) is getting older, it has no major problems as its programs and libraries continue to be updated. Fatdog 700/710 has acquired a large number of third-party contributed software packages and we plan to keep them usable for the foreseeable future, by supporting the 700 series until at least the end of the year. There may be one or two more releases (720? 721? or 730?) but they will use the same base. <br /> xscreenshot is updated Linux'General <a href=/wiki/wiki.cgi/Xscreenshot target=_blank>xscreenshot</a>, my dead-simple screen capture program for X11, gets a facelift. It can now capture screenshots with the mouse cursor in them; and it can also capture a single window. Oh, and now the filenames are created based on timestamps, rather than just a running number. You can get the latest version from <a href=/wiki/main/files/xannotate-2016-10-21.tar.bz2 target=_blank>here</a>.
Fatdog64 710 Final is released Fatdog64'Linux The final version of Fatdog64 710 has been released. A lot of improvements since the last Beta release in August 2016; you can see the details in the <a href= target=_blank>Release Notes</a>. <br /> <br />You can also leave your feedback in the Puppy Linux forum, where we made our <a href= target=_blank>Announcement</a>. <br /> <br />Get it from the usual locations: <br /><a href= target=_blank>Primary site - (US)</a> <br /><a href= target=_blank> - European mirror</a> <br /><a href= target=_blank> - Australian mirror</a> <br /><a href= target=_blank> - European mirror</a> <br /> <br />It may take a while for the mirrors to update. How to destroy FOSS from within - Part I Linux'General Although I have never set a scope for this blog, from my previous posts it should be obvious that this is a technical blog. I rarely post anything non-technical here; and I plan to keep it that way. <br /> <br />But there have been things moving under the radar which, while not technical in themselves, will affect technical people the most, and hit them the hardest. Especially people working in FOSS, either professionally or as a hobby. <br /> <br />This blog post is too long to write in one go, so I will split it into a few posts. <br /> <br /><hr> <br /> <br />For many years, I have been under the silly belief that nothing, nothing, short of a global-level calamity (the kind that involves the extinction of mankind), can stop the FOSS movement. The horse has left the barn; the critical mass has been reached and the reaction cannot be stopped. <br /> <br />The traditional way companies have fought each other is by throwing money at marketing and fire sales, outspending each other until the other caves in and goes bankrupt. Alternatively, they can swallow each other ("merge and acquire"); and once merged they just kill the "business line" or "the brand". <br /> <br />But they can't fight FOSS like that.
Most FOSS companies survive on support. You can acquire them (e.g. MySQL), and then kill them; but one can easily spring up the next day (e.g. MariaDB). You cannot use a fire sale on software licensing continuously, because the price of FOSS software licensing is eventually $0, and you can't compete with "free", well, not forever. <br /> <br />I still remember the days when a certain proprietary software company threw its flailing arms up in the air in exasperation, for not being able to compete against FOSS. The only thing it could do was bad-mouth FOSS and keep talking about "quality", and "amateurs", and "unprofessional" when it was obvious its own products and conduct were none the better either. <br /> <br />So I was a believer that money cannot stop FOSS. <br /> <br />And how wrong I turned out to be. <br /> Fatdog64 710 Beta Release Fatdog64'Linux In development for over 3 months, this beta release contains many fixes and improvements since the last Alpha release. It is the continuing journey towards Final, which we aim to make happen soon. <br /> <br />During this beta period we were greatly helped by Jake SFR (from the Puppy Linux forum), who contributed bug reports, bug fixes, and feature-improvement patches; we were also helped by forum member step who, in addition to providing bug reports and patches, also maintains key Fatdog applications such as wallpaper-manager and findnrun, among others. The beta release would not be as good as it is were it not for the efforts of these two gentlemen. So our heartfelt thanks to them.
<br /> <br /><a href= target=_blank>Release Notes</a> <br /><a href= target=_blank>Announcement</a> <br /> <br />Get it from the usual locations: <br /><a href= target=_blank>Primary site - (US)</a> <br /><a href= target=_blank> - European mirror</a> <br /><a href= target=_blank> - European mirror</a> <br /><a href= target=_blank> - Australian mirror</a> <br /> <br />It may take a while for the mirrors to update because ibiblio has been having problems recently. Android: detecting outgoing call pickup Android I've been very busy programming an app on Android lately. It's an emergency app - one that enables someone in distress to make an emergency call as well as report the situation to a monitoring server, which then notifies pre-defined parties so they can take action to help. <br /> <br />It is a very interesting experience, and quite challenging. There is too much to tell in just one blog post, so I'll probably spread it over a few posts as time allows. Or I'll follow up with some articles. <br /> <br />To begin with, please remember two facts: Android, as a platform, is 9 years old as of now. It was also a platform originally designed to serve as a base for smartphones. So I expected that they would have straightened out all the kinks in it, and would have good support for telephony functions. <br /> <br />It turns out that it doesn't. <br /> <br />For example - there is no function, or event, whatsoever, to detect that an outgoing call has been picked up by the remote party. Sure, Android's own phone application knows this, but for some reason this knowledge is not disseminated to others. In Android 5+ you can get this information, but only if you are a "system" app. Most applications are *NOT* system apps, because, well, to be able to install as a system app, you need to root your device first. So this isn't a solution you can apply generally.
<a href= target=_blank>Stack Overflow</a> has been full of questions about this for many years, with no good answer to this day. Nor has Google made any move to add this feature (I'm quite sure that a few Google engineers are watching Stack Overflow). <br /> <br />But I *need* this ability to detect remote pickup, because, well, in my application, if the outgoing call is not answered within a certain time, I need to terminate the call and call another number. What to do? <br /> <br />I solved it by detecting the <a href= target=_blank>ring-back tone</a>. As long as the call has not been picked up, the ring-back tone will be heard. If the ring-back tone is no longer heard after a certain time (and the call is still ongoing), we can assume the call has been picked up. Booting your BIOS system via UEFI Linux'General In my previous post, I wrote about my exploration of running UEFI on BIOS-based systems. The original motivation was to find a "cure" for long boot times from USB flash drives when the initrd is large (as is the case in Fatdog). I reasoned that since in many BIOS systems USB booting is done via hard-disk emulation (and thus depends on the quality of the emulation), it would be better to run a firmware that recognises USB devices and is capable of booting from them directly, without emulation. <br /> <br />I managed to get <a href= target=_blank>DUET</a> working on qemu, but it didn't work on some of my target systems. Another alternative that I explored is <a href= target=_blank>CloverEFI</a>, which is a fork of DUET. This worked better than DUET and booted on systems where DUET wouldn't. However, I could not notice any improvement in boot times. I haven't looked at DUET's disk driver; I was hoping that it would provide a real UHCI/EHCI hardware driver, but it probably doesn't - if it still depends on the BIOS to access USB via hard-disk emulation, then I've gained nothing. <br /> <br />So the initial objective can be considered a failure. 
<br /> <br />However, come to think of it, I now have a better reason why you would want to run UEFI on your BIOS system. When you run DUET, you are, essentially, "flashing" your BIOS and "upgrading" it with a newer UEFI firmware. While BIOS can do most of what UEFI can, there is one thing that it cannot do: it cannot boot from disks over 2TB in size†. This is not a hardware limitation; it is a consequence of applying a 36-year-old design meant for 5 MB harddisks to today's world. With the UEFI "update", you can format your disk using GPT and boot successfully from it. <br /> <br /><hr> <br />Note†: It is possible to format the disk using GPT and have BIOS boot from it. I even described the process in <a href=/wiki/wiki.cgi/BIOSBootGPT target=_blank>my own article</a>. That article, however, has a non-obvious limitation: the bootloader you use must be capable of using the filesystem and booting the OS of your choice. The article was targeted at Linux users, thus syslinux was the chosen example, and it would work beautifully. If, however, you want to boot another OS that syslinux doesn't understand, then you have to choose a different boot loader that: <br />a) can be booted by BIOS <br />b) understands GPT <br />c) can boot your OS of choice <br /> <br />In this case, booting a GPT disk via DUET doesn't sound so unreasonable, considering that you've got more choices of UEFI bootloaders than non-UEFI ones for some specific OSes. <br /> UEFI is the new DOS General As I was doing some reading about UEFI emulation on BIOS systems, I came across this interesting link: <a href= target=_blank></a>. In essence, what the linked page says is that UEFI is essentially a clone of DOS. I'm inclined to agree. 
<br /> <br />Here is why: the page elaborates on and compares how (from the end-user's perspective) they are essentially the same: there is a kernel (UEFI TSL and UEFI RT [explanation <a href= target=_blank>here</a>]), there is a command line interpreter (shellx64.efi), there is a standard executable binary format (.efi files, which are some sort of flat-mode PE/COFF [details <a href= target=_blank>here</a>]), there is a system library you can link against to build your own binaries (EDK - UEFI Dev Kit, c.f. libc); and an .efi binary can do anything that you want it to do, just like a DOS program can. UEFI provides kernel-like services: it handles input devices, manages text and graphical displays, and manages the filesystem (FAT32 - the successor to DOS's original FAT filesystem). The shell is single-user, just like COMMAND.COM. You can even extend its capabilities by installing "drivers" - filesystem drivers, network drivers, what have you. A 64-bit DOS with support for all modern hardware, here we come. What's not to like? <img src=images/smilies/happy.gif /> <br /> <br />If your system comes with BIOS, you can run UEFI firmware using DUET (Developers' UEFI Environment). DUET is basically UEFI firmware on a disk (or flash drive, or optical drive) that you can "boot" from your BIOS. Rod Smith (the author of rEFInd, the popular UEFI boot manager) wrote about it <a href= target=_blank>here</a>. Once booted, DUET takes over the system, and the whole system now acts as if it had UEFI firmware. You can boot your UEFI-capable OS with it, or you can run shellx64.efi - welcome to UEFI DOS. <br /> <br />If your system already comes with UEFI firmware in ROM - that's the equivalent of having ROM DOS. Rejoice! <img src=images/smilies/happy.gif /> One bootx64.efi to rule them all Linux'General Barry recently blogged about <a href= target=_blank>gummiboot</a>, which contains an interesting link to a feature of gummiboot that I had overlooked previously. 
Barry linked to a Phoronix article, which linked to a <a href= target=_blank>blog post</a> from Harald. <br /> <br />TL;DR: gummiboot has a feature to build a single UEFI binary that contains the Linux kernel, the initrd, and the kernel command line. One UEFI file that contains the entire OS. <br /> <br />Yes, with this, you can have one bootx64.efi (bootloader) that actually contains the entire operating system (kernel, initrd, etc). While the idea is not new - Rob Landley pushed for the ability to embed the initrd into vmlinuz a long time ago - this goes one step further: embedding into the bootloader! <br /> <br />Why would we even bother? For one thing, it enables you to carry a stick with a FAT32 partition on it, and a single file strategically located and named /EFI/boot/bootx64.efi which contains the entire operating system, for recovery and rescue purposes. It also means the return of the boot-time virus - this time in the form of a boot-loader virus (instead of a boot-sector one) from days past, if you are not careful. <br /> <br />Another thing is - if you run an embedded system with a UEFI bootloader, then once your OS is loaded entirely into RAM, you can happily replace/upgrade your OS ("firmware") in one swoop - there are no transactions needed to check whether the bootloader update works okay, whether the kernel update works okay, whether the initrd works okay ... you just replace one file; if that one file's update is okay (checksum matches, etc) then all is good. <br /> <br />Harald has the code <a href= target=_blank>here</a>, but it's somewhat tied to Fedora and systemd. Here is the extracted code that does the actual magic. 
<br /><pre class=code><code class=highlight>#!/bin/sh <br /># usage: build-efi.sh output.efi <br />echo "your kernel cmdline" > cmdline.txt <br />objcopy \ <br /> --add-section .osrel=/etc/os-release --change-section-vma .osrel=0x20000 \ <br /> --add-section .cmdline="cmdline.txt" --change-section-vma .cmdline=0x30000 \ <br /> --add-section .linux="/path/to/your/vmlinuz" --change-section-vma .linux=0x40000 \ <br /> --add-section .initrd="/path/to/your/initrd" --change-section-vma .initrd=0x3000000 \ <br /> linuxx64.efi.stub "$1" <br /></code></pre> <br /> <br />The only catch is this - where does this "linuxx64.efi.stub" come from? <br /> <br />This EFI stub is built as part of the gummiboot bootloader. Gummiboot has been "obsoleted" as its contents were "absorbed" into systemd (and renamed systemd-boot, or something); but the code still exists and still works nicely here: <a href= target=_blank></a> - you just need to check out one commit before the final one (the final commit deletes everything, to persuade people to move to systemd-boot). <br /> <br />I tested this with Fatdog64's initrd, with and without the basesfs in it. Without the basesfs, I ended up with a 61MB bootx64.efi. With the basesfs, I ended up with a 366MB bootx64.efi. Both work as expected when launched from qemu, as long as I have 2GB of RAM or more. <br /> <br /> Fatdog64 FatdogArm double release Fatdog64'FatdogArm'Linux FatdogArm Beta4 is released. <br /><a href= target=_blank>Release Notes</a> <br /><a href= target=_blank>Downloads</a> <br /> <br />Fatdog64 710 alpha is released. <br /><a href= target=_blank>Release Notes</a> <br /><a href= target=_blank>Forum announcement</a> <br /><a href= target=_blank>Downloads</a>. <br /> <br />As usual you can find them on ibiblio's mirrors too: <br /><a href= target=_blank></a>, <a href= target=_blank></a>, <a target=_blank></a> <br /> <br /> FatdogArm on Odroid-XU4 FatdogArm'Linux'Arm A kind gentleman who goes by the nickname of "pjf" on the Puppy Linux forum gave me an Odroid-XU4 to play with. 
<br /> <br />As a result, FatdogArm now supports the Odroid-XU4. The support is preliminary - it's mainly kernel support; the usual hardware acceleration stuff isn't supported yet. <br /> <br />The kernel package for the Odroid-XU4 (and also the XU3 - the two are software-compatible with each other) is here: <a href= target=_blank></a>, or on any of ibiblio's mirrors. <br /> <br />This work is still in flux, though, as I'm still tweaking the kernel. It's now on my 3rd compile. It may change. And while you can find the beta4 SFS there, I haven't really released it and I may still make some changes soon. <br /> <br />The Odroid-XU4 is the little machine that could. The SoC is a Samsung Exynos 5422. It comes with 8 cores (ARM big.LITTLE architecture - 4x 2GHz cores, and 4x 1.7GHz cores), <b>*ALL*</b> of which can run simultaneously under HMP mode. <br /> <br />It is so powerful that: <br />--- <br />a) I can decode h.264 1080p video and render it at 1080p, with AAC audio, using software only (no hardware acceleration). <br />b) I can watch youtube using html5, with audio, without any stutter. <br /> <br />I've never been able to do this on my previous boards. It is the fastest little board I've ever used, bar none. <br /> <br />It comes at a cost, though. All that speed doesn't come from nothing. It is no longer fanless like the Odroid-U2; the XU4 comes with a fan. The power supply can now provide juice up to 4A - though this is probably also to support the 2 USB3 ports on the board. <br /> <br />PS: I used the 3.10.y kernel. The 4.2 kernel is marked as EXPERIMENTAL by hardkernel and currently lacks many of the fine improvements you can find in 3.10, like sound and HMP support. It was created to be able to boot Debian server, but I find that even for server work it is no good, because it lacks the ability to schedule the cores properly. And furthermore, it seems to have been abandoned (the last update was Aug 2015, while 3.10 was updated just a week ago). 
<br /> <br />PPS: Oh, and at the same URL, you can find official Raspi2 kernel packages too. These are my official kernel packages, which I built from source, and for which I can also supply the kernel source SFS. The berryboot-based kernel is still available at its old link, but I'm going to take it down soon. Can a FOSS contributor retract his/her contributions? General Another aspect of <a href= target=_blank>Rage-quit: Coder unpublished 17 lines of JavaScript and “broke the Internet”</a> comes from the comments I've read on-site: is it okay for a FOSS contributor to retract his/her contribution from a public site? Some say yes (the contributor has rights) and some say no (once open, it is open forever). <br /> <br />I would think the answer is obvious, if we separate the contribution from the publishing. <br /> <br />The author of a FOSS contribution has full rights to his contribution - he can retract, remove, destroy, change, or even change the license of his work. There is no question about it. <br /> <br />But due to the nature of FOSS, once the contribution is published, anyone can take it and re-publish it (with attributions as needed). The original author has no say about it and can't demand that the copies be taken down; because when he/she published the code, he/she gave the world the irrevocable right to do just that. <br /> <br />That does not mean the author cannot revoke his/her work - of course they can. It's just that he can't demand that everyone else must also take down their copies of his/her work. <br /> <br />When an author publishes his/her work through a 3rd party, however, he/she has to obey the terms of this 3rd-party publisher. Some grant the right to retract and delete, some do not. The point is, the publisher must make the terms and conditions clear. <br /> <br />Github, for example, allows you to retract and delete anything you publish on it - no trace will be left on its site if you choose to remove your work. 
Facebook is at the opposite end - although at the beginning they didn't make it clear, nowadays it is pretty obvious that while you can delete your account and logins, whatever you submit to Facebook will live forever, and they can even use it long after you've removed your account. You give them that right when you join Facebook. If you don't agree - well, don't use Facebook. Simple. <br /> <br />Now back to npmjs. They should have made it clear whether they allow (or disallow) contributors to remove their contributions; and then <b>stand by that</b>. If they allow authors to remove their contributions, people who use the service know that anything on npmjs should be considered ephemeral and can disappear at any time - thus they can take mitigating actions (or choose not to use the service at all). If they don't allow removals, authors who contribute to the service know that anything they choose to publish through it is perpetual, and can then choose whether or not they want to contribute. But they can't have it both ways - because in the end they will irritate both the authors and the end users. <br /> <br /> Fatdog64 710 builds 32-bit/64-bit wine Fatdog64'Linux Fatdog64 710 passed its ultimate test this weekend: the ability to build <a href= target=_blank>wine</a> with support for running both 32-bit and 64-bit Windows applications. <br /> <br />To support 32-bit Windows apps, wine must be built in 32-bit mode. To support 64-bit Windows apps, wine must be built in 64-bit mode. To build a wine that supports both, it must be built in both 32-bit and 64-bit mode. That requires multilib support. <br /> <br />And that's the new, major feature of Fatdog64 710: Fatdog64 now supports multilib natively. Building wine in both 32-bit and 64-bit mode is the final test that its multilib capability is complete, working, and correct. <br /> <br />Happy Easter everyone. Local copy anyone? 
General I just read this: <a href= target=_blank> <br />Rage-quit: Coder unpublished 17 lines of JavaScript and “broke the Internet”</a>. <br /> <br />There are too many interesting aspects to consider from the article, but the one that surprised me the most is this: somebody removed their contribution from a public repo, and everything broke? Really? Hasn't anyone heard of "local copies"? Fatdog64 710 enters testing stage Fatdog64'Linux Fatdog64 710 is the next generation of Fatdog64. It is still part of the 700 series but is considered a separate branch, because it has a new build system (both for system and user packages) as well as other infrastructure changes which I prefer not to disclose for now. It shares a common base with 700, thus many software packages will be largely backward- and forward-compatible between 700/710, although some may not be, due to the use of many newer libraries in 710. <br /> <br />710 has been in the works for about a year, since the first 700 release went final, but it got stuck there as real-life priorities took over. I have actually had 710 ready for testing since early Feb this year, but I had to postpone it because I needed my laptop to be stable and couldn't afford to run a test OS at that time. <br /> <br />Yesterday, however, I took the plunge and migrated my savedir to 710. The testing process has begun. Fatdog702 ISO re-uploaded Fatdog64'Linux Due to the CVE-2015-7547 scare that hit glibc recently, plus the fact that it is not easy to update glibc, I've decided to replace the Fatdog64 702 ISO that was uploaded a few days ago with a new set of ISO, devx, and nls SFS that contains a newly patched glibc. <br /> <br />The CVE patch itself comes from the Debian team (since the official patch only applies cleanly to the latest glibc - not to the glibc 2.19 that 702 uses). Thank you Debian. 
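<br /> <br />If you want to check your downloads against the md5sums listed below, any md5 tool will do; for illustration, here is a small sketch of the verification step in Python (the filenames are placeholders - substitute the real ISO/SFS names and the published sums):

```python
import hashlib

def md5_of(path, chunk_size=65536):
    """Compute the md5 digest of a file, reading it in chunks
    so a multi-hundred-MB ISO doesn't have to fit in RAM."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Illustrative usage with a throwaway file; for a real download you would
# compare, e.g., md5_of("Fatdog64-702.iso") against the published sum.
with open("sample.bin", "wb") as f:
    f.write(b"hello\n")
print(md5_of("sample.bin"))  # → b1946ac92492d2347c6235b4d2611184
```

(The command-line equivalent is simply putting the published sums in a file and running `md5sum -c` on it.)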
<br /> <br />The new packages' md5sums are as follows: <br />--- <br />c7bff729fc3a6100246020466e94e6af Fatdog64-702.iso <br />54cc4ef28741e9e9844ab6f5ca66d41c fd64-devx_702.sfs <br />975d127442a8a336ec14dc743d51ad61 fd64-nls_702.sfs <br /> FatdogArm on Raspberry Pi 2 FatdogArm'Linux'Arm There was some interest in running FatdogArm on the Raspberry Pi 2 (raspi2 for short). While FatdogArm will never run on the original Raspberry Pi for many reasons (the biggest one: its unsupported ARMv6 architecture), the raspi2 has a modern quad-core Cortex-A7 (=ARMv7 architecture) CPU running at 900 MHz, and comes with 1GB RAM standard. Not stellar, but not bad either. <br /> <br />Thanks to the help of forum member "mories" and the berryboot 2.0 kernel/modules, it was possible to get FatdogArm on the raspi2: <a href= target=_blank></a>. It's available here: <a href= target=_blank></a>. <br /> <br />But I could not test it since I didn't own a raspi2 myself. Now, a kind gentleman who prefers to remain nameless has given me a raspi2 board, together with a very nice cover. <br /> <br />Though I am very busy these days, this inspired me to get the basics going. I've just built a kernel directly from Raspi's official kernel source distribution (branch 4.1.y). Together with the closed-source bootloader (also from Raspi's official firmware distribution), I've managed to get it to boot to the desktop. <br /> <br />There is still a long way to go before the raspi2 becomes a tier-1 supported platform (we need to configure various hardware acceleration modules), but at least it is now running under its own kernel. <br /> <br />When things are a bit more stable I'm going to prepare a proper raspi2 kernel package, replacing the berryboot-based package we have right now. And perhaps publish beta4. Fatdog64 ISO builder is released Fatdog64'Linux Fatdog64 ISO Builder is a tool to make custom Fatdog64 ISOs. 
<br /> <br />It's similar to Puppy Linux's "woof", except that this builder specifically builds from Fatdog64's self-built packages only. Since it works with Slackware-style packages (.txz), you may be able to tweak it to work from Slackware packages as well, though that has never been tested. <br /> <br /><a href= target=_blank>Announcement</a> Fatdog64 702 Final is released. Fatdog64'Linux After the planned two weeks of RC stage, 702 is finally released. <br /> <br /><a href= target=_blank>Release notes</a> <br /><a href= target=_blank>Forum announcement</a> <br /> <br />Get it as usual from <a href= target=_blank>ibiblio</a> or one of its mirrors: <a href= target=_blank>aarnet</a>, <a href= target=_blank></a>, and <a href= target=_blank></a>. <br /> Fatdog64 702rc is released Fatdog64'Linux A maintenance update, mainly fixes and a few updated packages. <br /> <br /><a href= target=_blank>Release notes</a> <br /><a href= target=_blank>Forum announcement</a> <br /> <br />Get it as usual from <a href= target=_blank>ibiblio</a> or one of its mirrors: <a href= target=_blank>aarnet</a>, <a href= target=_blank></a>, and <a href= target=_blank></a>. <br /> Updated kbstate and a2dp-alsa Linux'General I've updated <a href=/wiki/wiki.cgi/KbState target=_blank>kbstate</a> to detect multiple keys from multiple event devices at once, making usage a lot simpler. <br /> <br />I've also updated <a href=/wiki/wiki.cgi/BluezA2DP target=_blank>a2dp-alsa</a> to work correctly with Android devices; improved it so that a2dp-buffer is no longer necessary; and fixed the Makefile for newer gcc. It can now be used as a "pass-through router" reliably. Updated savedir support on FAT Fatdog64'Linux A "save directory" (savedir for short) is a way of persistence whereby user-modified files are stored in a directory somewhere, as opposed to a "savefile", in which they are stored in a big loopback-mounted file. 
<br /> <br />The savefile is a very convenient and reliable method of persistence, and it works across many different filesystems, including networked, non-POSIX ones, because we can always choose the filesystem inside the savefile - usually one that is POSIX compatible. <br /> <br />However the savefile has a minor irritation - you are limited by its size. Sure, you can always resize it if it gets full, but it's a hassle. The savedir, on the other hand, doesn't have this limitation, but it must be located on a POSIX filesystem. Well, not really - but if it isn't, you'll get a lot of odd behaviours. <br /> <br />Fatdog64 has supported savedir since version 620 (April 2013); this includes support for non-POSIX filesystems too, such as NTFS and FAT. <br /> <br />The support for NTFS was upgraded in October 2015 to support true POSIX permissions, made available by recent versions of ntfs-3g. NTFS is pervasive and is a good compatibility filesystem for Windows, so this was an overdue update (although I personally still recommend that you use a savefile on NTFS). <br /> <br />I've now upgraded the support for savedir on FAT as well, using <a href= target=_blank>posixovl</a>; this gives savedir on FAT support for rudimentary POSIX features, such as permissions, device nodes, and fifos. <br /> <br />However, using posixovl as the base of a savedir isn't without problems. For one thing, it cannot be unmounted cleanly - so you must always run fsck at boot ("dofsck" will do this for you). On another front, posixovl's emulation of POSIX on FAT isn't perfect, and you will surely notice some oddities. And the last point is - FAT is much more corruption-prone compared to modern filesystems (including NTFS). But if you're happy to play with fire, then - yeah, why not? <img src=images/smilies/teeth.gif /> <br /> <br />As a bonus, I also made posixovl work with CIFS - so now you can enjoy a network-based savedir with full POSIX features (plus some unwanted oddities, as I said above). 
<br /> <br />I've made the use of posixovl for FAT and CIFS optional. You can always fall back to the old method of using FAT and CIFS directly - which will unmount cleanly, but then you will have to live with the limitations of non-POSIX filesystems (e.g. all files turn into executables, permissions are lost, etc). Or, of course, just use a savefile <img src=images/smilies/happy.gif /> <br /> <br />This will be in the next release of Fatdog, whenever that will be. <br /> Updated article: New Apps on Old Glibc Linux'General Somebody asked me recently about my article, <a href=/wiki/wiki.cgi/NewAppsOnOldGlibc target=_blank>How to run new apps on older glibc</a>. He tried to follow the instructions in the article but encountered an error. <br /> <br />As it turns out, when I wrote that article I only wrote half of it. I planned to write the other half, but other things took my attention and I forgot about it. <br /> <br />I have now updated it and written the complete steps, as well as re-testing them to make sure they work. <br /> <br />So if you're running a new application that depends on a newer glibc, but you can't re-compile it or upgrade your OS for whatever reason, you may want to look at that article again. Review of meteor General Not too long ago I was looking at alternative development tools for Android, other than what Google provides. I got quite interested in <a href= target=_blank>meteor</a>, which bills itself as a "Javascript App Platform". I have written Javascript since 1997 (since before it was called EcmaScript, since before DOM Level 1 was standardised); while not exactly a fan of the language, I can do things with it, so meteor intrigued me. In the past I also had fun with Aptana Jaxer (now defunct), which more or less did the same thing - without the Android part. <br /> <br />Most of it is what it says it is. The documentation works, the tutorial works (which is more than what many products from large companies can offer!). 
The Javascript works too, of course. <br /> <br />But I noticed something that I really don't like - it blurs the line between server-side activities and client-side activities. Let me explain. <br /> <br />In meteor, scripts can be tagged to run on the client (=browsers), on the server, or both. Round-trip latency is reduced or eliminated using transparent client-side caching (i.e. your code doesn't need to know about it - generated plumbing code plus embedded libs take care of that). You're supposed to write stuff as if it runs on the client, and only write server-side code when necessary (at least that's the impression I got). <br /> <br />This is supposedly a very good thing - focus your development work on your requirements rather than the plumbing of the platform, and get stuff done quickly. <br /> <br />But it feels wrong to me. I would prefer an environment where I know (and can separate) what runs on the server and what runs on the client. <br /> <br />For one, with this much close coupling, when the plumbing stops working or starts leaking, I can imagine that debugging will be extremely fun. <br /> <br />Another downer for me is the realisation that the app I make will be fully tied to this platform. The frontend (client-side) can only work with the backend (server-side) it was written with; there is no easy way to make a single server that serves heterogeneous, multi-platform clients. <br /> <br />All this might still be acceptable if the end result of the (Android) app were a single bundle that I could deploy as a standalone - but no, meteor doesn't work that way. The Android app it creates is basically just a webapp facade (a bunch of html and js), and needs to connect to a remote server for it to *work*. The server-side stuff is not included in the APK. That means, if the (remote) server dies, the app is useless. <br /> <br />There are other concerns, but they are relatively minor compared to the above, so with great disappointment I have to put it aside. 
It had so much potential. Fatdog64 lives on Fatdog64'Linux There have been no posts about Fatdog64 lately. But that does not mean its development has stopped. On the contrary, it is still actively maintained. I've received a lot of help from Puppy Linux forum members such as SFR, step, and L18L, to mention a prolific few. <br /> <br />If you want to follow what has been updated recently, you can look at an overview of the changes since the 701 release <a href= target=_blank>here</a>. <br /> <br />Also, recently somebody asked me what Fatdog could do, so I decided to write an article about it <a href=/wiki/wiki.cgi/FatdogIsVersatile target=_blank>here</a>. <br /> Puppy Linux Slacko 6.3.0 is released Linux'General Puppy Linux "Slacko" is the flagship Puppy Linux based on Slackware. <br /> <br />Mick has just released the latest and greatest version 6.3.0 of Puppy Linux Slacko, in both 32-bit and 64-bit flavours (Slacko and Slacko64). <br /> <br />Slacko64 is the first ever official (non-beta) release of 64-bit Puppy Linux, so these are exciting times! <br /> <br />Go grab them and give them a test drive yourself, from the <a href= target=_blank>Puppy Linux Slacko official homepage</a>. <br /> <br />Note: Fatdog64's 32-bit compatibility SFS is based on 32-bit Slacko 5.96 (a beta version of Slacko 6.x). <br /> <br /> Bluetooth support for Cubox-i FatdogArm'Linux'Arm'Fatdog64 Bluetooth was the last feature of FatdogArm that wasn't working on the Cubox-i (it works on the Nexus 7). The last time I looked at it was in April this year. My main problem was that I always got the message "can't set hci protocol" near the end of the firmware upload, when using the built-in hci driver with brcm_patchram_plus (and a similar message when using the external hciattach). <br /> <br />A lot of people have reported this, and only got shrugs ... "works for me" type replies. 
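<br /> <br />As the rest of this post explains, the fix turned out to be on the kernel side. For reference, the serial-Bluetooth support in question lives under the BT_HCIUART options; a sketch of the relevant .config fragment (the exact set of sub-options varies between kernel versions, so treat this as indicative, not exhaustive):

```
CONFIG_BT=y
CONFIG_BT_HCIUART=y
CONFIG_BT_HCIUART_H4=y
CONFIG_BT_HCIUART_BCSP=y
```

If your board's defconfig lacks these, serial-attached Bluetooth controllers (like the SDIO/UART one in the Cubox-i) cannot attach, no matter how you massage brcm_patchram_plus.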
Most of the "solutions" to this problem concern variations of the parameters to use with brcm_patchram_plus, as well as various links to different versions of .hcd file dumps. However, most of the threads ended there. There was no confirmation of whether or not the fix worked, or whether there were possibly other causes. And no-one said anything about the kernel. <br /> <br />As it turns out, the kernel *was* the problem. The bluetooth host hardware in the cubox-i is connected via MMC SDIO, using the serial interface. To support serial bluetooth devices correctly, the kernel needs the BT_HCIUART_* options to be enabled. The default defconfig from the SolidRun 3.10 kernel didn't enable these <img src=images/smilies/doh.gif />, and there were no notes whatsoever saying these configs are needed at all <img src=images/smilies/thumbdown.gif />. I had been using SolidRun's defconfigs (= manufacturer knows best, etc) - and was badly beaten by it, wasting hours on unnecessary debugging <img src=images/smilies/cry.png /> <br /> <br />Curiously, the SolidRun 3.14 kernel defconfig *does* have these enabled - so they *do* know. Why this isn't documented anywhere else - I have no idea. Go and ask them. <br /> <br />Anyway, as soon as the kernel was rebuilt, bluetooth worked. I tested it by getting it paired and connected with a bluetooth speaker and a bluetooth keyboard. Both work nicely. <br /> <br />I have integrated these findings into a package called imx6-bluetooth, and have uploaded it to the repo. However, it won't work unless you use a kernel with those configs enabled. <br /> <br />I'm going to upload a new kernel for the cubox-i later. If you're interested in using it *now*, then leave me a message. <br /> <br />With this, the FatdogArm platform support for the cubox-i is considered complete. MariaDB: Eat your cake and still have it General Some people say you can't make money with open source or Free software. But there are many exceptions to that. Red Hat is one of the most prominent. 
Well, MariaDB is apparently another one, and a special one at that. <br /> <br />You see, MariaDB is a fork of a piece of software called MySQL. MySQL is Free software (GPL licensed) that was developed by MySQL AB. In 2008, MySQL AB was sold to Sun Microsystems for US$1 billion (Sun was later bought by Oracle). MySQL AB held the original copyright to the MySQL source code, and that right was sold to Sun (along with other things like the name, trademarks, etc). <br /> <br />But being Free software, one can take the MySQL source code and "fork" it, i.e. make modifications to the source code and re-distribute both the modified code and binary programs for others to use - without any (financial) obligations to MySQL AB (or Sun, or Oracle) as long as the original (GPL) license requirements are met. <br /> <br />"MariaDB" is one such fork. To "support" and "maintain" (and also "promote") MariaDB, there is an organisation called MariaDB Corporation AB. This organisation has received many funding rounds from venture capital (VC) companies. We are not talking about a $10,000 individual donation, or a $500,000 kickstarter campaign; we're talking about US$20 million in direct VC funding, the last round being in Feb 2015. <br /> <br />We all know that VC companies are not charities. They expect returns on the money they invested - in other words, returns on the money they gave to MariaDB AB. The usual way to get these returns is to wait for MariaDB AB to be sold to someone else (to the public via IPO, or to other, larger companies through private deals). Depending on your VC math, that $20 million funding translates to a company valuation of between $200m and $2 billion - not too shabby at all. <br /> <br />With me so far? OK. The punchline: the person who created MySQL, MySQL AB, the MariaDB fork, and MariaDB AB is one and the same person. He created MySQL and MySQL AB and sold it (in 2008). 
Not long after that (in 2009) he created the MariaDB fork, and later on he also started MariaDB AB; and by the look of it, MariaDB AB will probably get sold too, sooner or later. <br /> <br />I don't know about you, but I feel this is proof that you can indeed eat your cake for breakfast *AND* still have it (so you can eat it again for lunch - and perhaps still have it even after that, for dinner? <img src=images/smilies/happy.gif /> ). And this is only possible if you're doing Free software. <br /> Javascript "Promise" General No, Javascript isn't promising you anything. It's just an oddly named object in Javascript which, despite its odd name, is worth considering, especially if you are losing too much hair from doing a lot of async callbacks. <br /> <br />An explanation of what it is, why it is useful, how it works, and how to write your own implementation in 90 lines - all <a href=/wiki/wiki.cgi/JavascriptPromise target=_blank>here</a>. Small web browser Linux'General Since we are on the subject of small programs, are there any small GUI web browsers? Less than 50K, perhaps? I must be joking, right? <img src=images/smilies/teeth.gif /> <br /> <br />Well, you _<i>could</i>_ make a small web browser like that. Just make a GUI shell that links in Yeah. That would work. May as well create a shell script that launches firefox. Hey, a small browser in 512 bytes! <br /> <br />Seriously, can we have a small browser, without external dependencies, that weighs less than 500K (excluding the weight of the GUI toolkits)? <br /> <br />The answer is that you can; and the key to it is <a href= target=_blank>libgtkhtml2</a>. This is an HTML 4.0- and CSS2-compatible rendering engine that weighs less than 500K. Since it is small, it makes sense to have it as part of the system library (it is used by the likes of Osmo, Claws Mail, and others); and if you already have it as a system library then you can truly make a browser of less than 50K that links in this library.
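<br /> <br />Such a shell really is tiny. Here is a minimal, hypothetical sketch of what it looks like - this is *not* my actual source (that is linked below), just an illustration against the libgtkhtml2 API (HtmlView, HtmlDocument and the stream functions) as I remember it; check gtkhtml.h on your system before relying on the exact names:
<br /> <br />
```c
/* tiny.c - a sketch of a minimal browser shell on top of libgtkhtml2.
   Assumes GTK2 and libgtkhtml2 headers/libraries are installed. */
#include <string.h>
#include <gtk/gtk.h>
#include <libgtkhtml/gtkhtml.h>

int main(int argc, char *argv[])
{
    gtk_init(&argc, &argv);

    GtkWidget *window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    gtk_window_set_title(GTK_WINDOW(window), "tiny-browser");
    g_signal_connect(window, "destroy", G_CALLBACK(gtk_main_quit), NULL);

    /* HtmlView is the rendering widget; HtmlDocument holds the parsed page */
    GtkWidget *view = html_view_new();
    HtmlDocument *doc = html_document_new();
    html_view_set_document(HTML_VIEW(view), doc);

    /* Feed HTML to the document through its stream interface */
    const gchar *page =
        "<html><body><h1>Hello</h1>"
        "<p>Rendered by libgtkhtml2.</p></body></html>";
    html_document_open_stream(doc, "text/html");
    html_document_write_stream(doc, page, strlen(page));
    html_document_close_stream(doc);

    GtkWidget *scroll = gtk_scrolled_window_new(NULL, NULL);
    gtk_container_add(GTK_CONTAINER(scroll), view);
    gtk_container_add(GTK_CONTAINER(window), scroll);

    gtk_widget_show_all(window);
    gtk_main();
    return 0;
}
```
<br /> <br />If your libgtkhtml2 ships a pkg-config file, something like <i>gcc tiny.c $(pkg-config --cflags --libs libgtkhtml-2.0 gtk+-2.0)</i> should build it (the pkg-config name may differ per distribution). All that's missing for a "real" browser is fetching the page and wiring up link clicks.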
<br /> <br />If you don't have it as a system library, you can still link it statically and have a final stripped executable that is less than 500K (the exact size depends on your compiler optimisation settings, etc). <br /> <br />I have made such a browser, and you can download the source <a href= target=_blank>here</a>. In Fatdog64, which has libgtkhtml2 by default, the binary size is really 38K. Linked statically, with -Os, the binary size is about 350K (on an x86_64 build). <br /> <br /><b>Note about the libgtkhtml2 source</b>: as you can probably see from the link given, libgtkhtml2 is a dead project. That gnome site listed version 2.11.1 as its final version, but there is (or was) a newer version from gnome-svn (which had also long been defunct) - which, fortunately, has been preserved by the Yocto project <a href= target=_blank>here</a>. I took this version, applied as many forward patches as I could find (mainly from the also defunct - and also preserved by Yocto), and added my own stability patches. My final copy of libgtkhtml2 is located <a href= target=_blank>here</a>. <br /> <br /><b>Final note:</b> libgtkhtml2 is old. It <b>*will*</b> choke, hang or crash on newer CSS3 (and some CSS2.1) or HTML5 stuff. It does not have Javascript. While its HTML parsing is not too bad (it uses libxml2's HTML parser - which *is* maintained), its CSS parsing is horrible - instead of grammar-derived parsing, it uses ad-hoc string searches. I have fixed some of the low-hanging bugs but many more still lurk in it. So I strongly advise you against using it for general-purpose web browsing - for that you have <a href= target=_blank>netsurf</a>, <a href= target=_blank>links2</a>, and other excellent projects - and while they aren't as small as libgtkhtml2, they do work on the modern Internet. <br /> <br />The only reason I tried to resurrect this is to use it as a small (local) help viewer for HTML content - just like mdview, in my previous post.
After all, you don't want a <a href= target=_blank>help viewer that links to multi-megabyte webkit libraries</a>, do you? <img src=images/smilies/teeth.gif /> <br /> <br /> mdview: a small, GTK-based markdown viewer Linux'General I am quite annoyed by help-viewer programs that are huge and pull in a lot of dependencies, sometimes a lot more than the main programs themselves. After all, their purpose in life is just to support the main program and to provide a convenient UI for viewing some pre-formatted text files. <br /> <br />Then I found that hardinfo has a very nice help viewer which is very under-utilised (because there are hardly any help documents in it). It supports direct viewing of markdown-formatted files (well, a subset of markdown), and it has *no* dependencies other than GTK. <br /> <br />After playing with it for a while I decided to detach it from hardinfo, polish it a little, fix a few bugs and add some more features; and now I have mdview, a 60K-sized help/markdown viewer. <br /> <br /><hr> <br /> <br />From the homepage: <br /> <br />mdview is a super light-weight, GTK-based markdown file viewer. It has no dependencies other than GTK itself. It reads and displays text files in (a subset of) markdown format, and provides live links to other files as well as to the Internet. It is ideal for showing help files (its original purpose), user manuals, and other small sets of hyperlinked markdown files. <br /> <br />Get it from here: <a href= target=_blank></a> Fatdog64 701 is released Fatdog64'Linux Maintenance update, mainly fixes and a few updated packages. New features include USB/bluetooth tethering, working bluetooth send/receive of files, an MTP browser, Find'N'Run, and a few others.
<br /> <br /><a href= target=_blank>Release notes</a> <br /><a href= target=_blank>Forum announcement</a> <br /> <br />Get it as usual from <a href= target=_blank>ibiblio</a> or one of its mirrors: <a href= target=_blank>aarnet</a>, <a href= target=_blank></a>, and <a href= target=_blank></a>. <br />