
Hello! This is my blog, powered by Known. I post articles and links about coding, FOSS, and more, mostly in French, or in English (when I could not find anything similar already written).


Doing low cost telepresence (for under $200)

8 min read

A friend and I recently started building a low-cost telepresence robot (sorry, link in French only) at our local hackerspace.

The goal is to build a robot that can move around a room remotely and stream audio and video in both directions. Our target budget is $200. We got a first working version (although it does not stream audio yet), so it is time to explain the setup and how to build your own =) All the instructions, code and necessary files can be found in our git repo.

Screen capture

3D model

Basic idea

When taking part in a group meeting remotely, using some videoconference solution, it is often frustrating not to be able to move around the room on the other side. This prevents parallel discussions, and if the remote microphone is of poor quality, we often cannot hear everybody clearly. Plus, a speaker may be hidden behind another one, and many similar problems arise.

The goal was thus to find a solution for videoconferencing (streaming both audio and video in both directions) while being able to move on the other side, so as to see everyone and come closer to the current speaker. Commercial solutions exist, but they are really expensive (a few thousand dollars). We wanted the same basic features for $200, and it seems we almost achieved it!

Bill of Materials

The whole system is built around a Raspberry Pi and a PiCamera, which offer decent performance at a very fair price. The rest is really basic DIY stuff.

Here is the complete bill of materials:

Total: $140


  • We had to use a Raspberry Pi 2 for the nice performance boost on this model. Even more important is the increased number of GPIOs, with 2 usable hardware PWMs (provided that you don't use the integrated sound card output). This is useful to control the two wheels with hardware PWM and get precise motion control. The camera holder can safely be controlled with a software PWM, and we did not experience any trouble doing so.
  • You can easily replace those parts with equivalent ones, as long as you keep in mind that the battery pack should be able to provide enough current for the Raspberry Pi and the servos. We used standard USB battery packs for simplicity and user friendliness. However, they are more expensive than standard RC lithium batteries and generally provide less current.
  • We had to use two battery packs. Indeed, the current peak drawn by the servos when starting was too much for the battery pack and was crashing the Raspberry Pi. With two separate power lines for the Raspberry Pi and the servos, we no longer have this problem, and this solution is easier than tweaking a single power line until the Raspberry Pi stops freezing (which it may never do).
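As background on the PWM discussion above: a hobby servo reads its target position from the width of a pulse repeated at roughly 50 Hz, which is why jitter-free (hardware) pulse generation matters for the wheels. A small sketch of the timing math (the 1.0-2.0 ms pulse range is an assumption; real servos vary):

```python
# Servo timing sketch (illustrative, not the actual disty code).
PERIOD_MS = 20.0      # one 50 Hz PWM frame
MIN_PULSE_MS = 1.0    # assumed pulse width at 0 degrees
MAX_PULSE_MS = 2.0    # assumed pulse width at 180 degrees

def duty_cycle(angle_deg):
    """Duty cycle (in %) encoding a target angle for a standard servo."""
    if not 0 <= angle_deg <= 180:
        raise ValueError("angle out of range")
    pulse = MIN_PULSE_MS + (MAX_PULSE_MS - MIN_PULSE_MS) * angle_deg / 180.0
    return 100.0 * pulse / PERIOD_MS

print(duty_cycle(90))  # 1.5 ms pulse -> 7.5 (% duty cycle)
```

A software PWM gets these pulse widths slightly wrong under load, which is acceptable for the camera holder but would make the wheels jerky, hence the hardware PWMs for motion.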

For the next version, we plan to add:

Total with these parts: $228


  • We used an HDMI screen, as the official Raspberry Pi screen uses most of the GPIO pins, which we need. We decided to use a bluetooth speaker since the integrated sound card was not usable, as we were using the two hardware PWM lines for motion. This way, we have a speaker with a built-in microphone, which is more compact than having the two of them separately.
  • The USB bluetooth adapter is impressively expensive, but it is the only one we found so far that we were sure would work with Linux without any problems. Plus, the other adapters we found were not much cheaper.
  • The total budget is $223 without shipping. It is a bit over the initial budget goal, but we can easily lower it to $200. Indeed, we did not especially look for the cheapest parts. In particular, we bought the servos from Adafruit, and I think we can find servos for less (especially for the camera holder, where a $5 micro servo should be enough). The bluetooth adapter is quite expensive as well, and I think we could find a cheaper one. Budget shrinkage will be our next goal, once we have everything working.

Building the robot

All the necessary stuff is in our git repo (or its github mirror; both should be kept in sync). The repo contains three main directories:

  • blueprints, the models of the robot.
  • disty, the main server code running on the Raspberry Pi.
  • webview, the web controller served by the Raspberry Pi.

First of all, you should laser cut the flat parts and 3D print the printable parts from the blueprints dir. The eps files in this directory are ready-to-cut files, whereas the svg files are the same ones in an easily editable format. You should laser cut the top and bottom files.

You should 3D print:

  • the picam_case_* files for the camera case we used (licensed under CC BY SA).
  • camera_servo_holder.stl, the plastic part holding the camera servo. You need to print it once.
  • wheel_servo_holder.stl, the plastic part holding the wheel servos. You need four of them.

For reference, teleprez.blend is the complete CAD model of the robot, in Blender format.

Assembling your Disty robot should be straightforward if you look at the following pictures :) Use two ball transfer units to stabilize the robot and lock them with a rubber band (or anything better than that). Adjust the height of the wheels tightly, so that both wheels and the ball transfer units touch the ground.




GPIO pinout for the connection can be found at

GPIO pinout

For the electrical wiring, we used a standard USB to micro-USB cable to power the Raspberry Pi from one battery (located below the robot, to add weight on the ball transfer units and ensure they stay in contact with the surface). On the other battery, we just cut a USB to micro-USB cable and connected the servos directly to it through a piece of breadboard. We had to use two batteries to prevent the current drawn by the servos from rebooting the Raspberry Pi.

Here you are, you have a working Disty!

Running it

This may not be super user-friendly at the moment, we hope to improve this in the future.

Download any Linux image you want for your Raspberry Pi. Install uv4l and the uv4l-webrtc component. Enable the camera and ensure you can take pictures from the command line (there is a lot of documentation about this on the web).

Then, clone the Git repo somewhere on your Raspberry Pi. You should build the main disty code, which is the server-side code. It handles the control of the servos (emitting PWM signals, etc.) and listens on UDP port 4242 for instructions sent from the webview. Instructions to build it are located in the associated README. You will need cmake and a system-wide install of WiringPi to build the code.
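To give an idea of the control channel (a hypothetical sketch: the actual wire format is defined in the repo, and "forward" is a placeholder command), sending an instruction to disty boils down to one UDP datagram. A local stand-in socket replaces the robot here so the example is self-contained:

```python
import socket

def send_command(sock, command, addr):
    """Each instruction travels as one small UDP datagram."""
    sock.sendto(command.encode("utf-8"), addr)

# Stand-in for the robot (which really binds UDP port 4242); an
# ephemeral port is used here so the example runs anywhere.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
server.settimeout(5.0)

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_command(client, "forward", server.getsockname())

data, _ = server.recvfrom(1024)
print(data.decode("utf-8"))  # forward
```

UDP fits well here: a lost "move" datagram is harmless, and there is no connection state to recover when the wifi hiccups.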

You can then start the robot. Begin by launching the disty program (as root, as you need access to the GPIOs), ./disty, and then start the webview, ./, also as root, since it serves the webview on port 80, which is below 1024 and hence requires root privileges. If you have ZeroConf on your Raspberry Pi (or a decent router), you can go to http://disty (or whatever hostname is set on your Raspberry Pi) to get the webview. Otherwise, use the IP address instead. Webview usage should be almost straightforward.

It should work out of the box on your local LAN. If you are behind a NAT, some black magic (which is implemented, but may not be sufficient) is needed to connect the remote user and the Disty camera. In any case, the remote side needs to be able to access the webview (port 80 on disty).


All contributions and feedback are more than welcome!

All the source code we wrote is under a beer-ware license, unless otherwise specified.

* --------------------------------------------------------------------------------
* "THE BEER-WARE LICENSE" (Revision 42):
* Phyks and Élie wrote this file. As long as you retain this notice you
* can do whatever you want with this stuff (and you can also do whatever you want
* with this stuff without retaining it, but that's not cool...). If we meet some
* day, and you think this stuff is worth it, you can buy us a beer
* in return.
*                                                                       hackEns
* ---------------------------------------------------------------------------------

If you need a more legally valid license, you can consider Disty to be under an MIT license.

Some sources of inspiration and documentation


Devops tools for workstations

5 min read

There is a growing interest in devops tools such as Docker and Puppet / Ansible / Salt / Chef, used to set up continuous integration, keep identical environments across development, testing, staging and production, and manage thousands of servers in the cloud. However, I recently realized that some of these technologies could also be of real interest for just a few machines, namely personal workstations. These are just some scenarios and use cases that occurred to me; I did not test all of them and they may be irrelevant, feel free to let me know :)

Scenario 1: I want a particular dev environment, without breaking everything

The first scenario is the most widely discussed around the web: I want my dev environment to be identical to production, but I do not want to break my system (python 2 vs 3, ruby stuff, …). For this purpose, one can use containers (e.g. Docker). I will not elaborate much on this one.

Note: I recently discovered that systemd has similar features through systemd-nspawn. In particular, see Lennart Poettering's articles on systemd to learn more about this and other systemd features.

Scenario 2: I use many different operating systems

Note: I did not try this scenario.

If you use many different operating systems on the same machine (say Windows and Linux, for instance), then why not consider virtualization? And not a stopgap like VirtualBox, but a real hypervisor such as KVM or Xen. These are widely used on servers to run multiple VMs, so why not use them at home, on your personal computer?

CPUs today ship with many virtualization-specific technologies, even on workstations, and they are powerful enough to handle it. You will benefit from many advantages, starting with easier maintenance of your system (as it is just a VM, it is easier to back up / restore, isolate, etc.) and things like hot-switching between operating systems.

One problem remains: handling the graphics card, which may not be easily shared between multiple VMs.

Scenario 3: You want an easy backup mechanism

One important point I previously discussed is the ability to recover from any problem, hardware fault for example. It is important on a server, as downtime is a real problem, especially if you have few servers, but it is also a major concern on a workstation. Especially as your laptop may fail at any time: it can experience basic hardware failure, it may fall and break, it may be stolen…

It is thus important to be able to recover a fully working system fast. One often talks about data backup, and this is indeed the most important part, as you cannot recover lost data, unlike a lost configuration (set of packages, state of configuration files, …). But this is not all: reinstalling a system is a time-consuming task, and not a really interesting one.

Devops teams have come up with tools to deploy a given configuration on many servers across the cloud: Ansible, Puppet, Chef, Salt and so on. So why not use them to deploy your computer configuration? If correctly managed, installing your laptop could be as easy as: partitioning the drive, bootstrapping the system (installing base packages and setting up SSH access) and running Ansible (the most basic of these tools, and the best fitted to this particular need) to reinstall everything else. Almost everything would be done automatically, perfect!

However, this requires maintaining a list of installed packages and associated files for Ansible to use, which can be a bit heavy in practice. It could then be interesting to have some way to ”blueprint” your system, i.e. to generate configuration descriptions from your existing system (as it is easier to install stuff on your system, tweak it and blueprint it afterwards, than to tweak the Ansible configuration description and re-run it each time).

To achieve easy blueprinting, another solution is to use etckeeper to store all your files under /etc in a Git repository (these are the only files supposed to be modified by you, as /usr is the domain of your distribution and should not be modified) and keep track of every change to them. Restoring from etckeeper and a list of installed packages (obtained with pacman) is quite easy and can even be done without Ansible.

On this particular subject of blueprinting, I wrote a Python script for Arch Linux (basically just a wrapper around the right pacman commands), available here. It may not be perfect, but it will give you a basis for blueprinting your system.
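To illustrate the idea (a toy sketch, not the actual script; the package names are made up): once you have saved the list of explicitly installed packages (the output of pacman -Qqe), restoring a machine essentially boils down to diffing that list against the current system:

```python
def diff_packages(saved, current):
    """Return (to_install, to_remove) between a saved blueprint
    and the packages currently installed."""
    saved, current = set(saved), set(current)
    return sorted(saved - current), sorted(current - saved)

# Hypothetical package lists, as pacman -Qqe would print them:
saved_list = ["base", "git", "i3-wm", "xbacklight"]
current_list = ["base", "git", "vim"]

to_install, to_remove = diff_packages(saved_list, current_list)
print(to_install)  # ['i3-wm', 'xbacklight']
print(to_remove)   # ['vim']
```

Feeding to_install back to the package manager (and /etc from etckeeper) is most of a restore.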

Another interesting lead for this scenario is Btrfs, which has nice snapshot abilities that can even be used over the network. This is something I did not test directly, but I am really interested in seeing what it can do…


Scenario 4: Sync multiple computers

One last scenario is the following: I have three computers I work on almost daily (my laptop, my desktop and another computer). Syncing files between them is quite easy (or at least achievable), but syncing configurations between them is much more difficult, in particular because the whole configuration should not be synced: some of it is device-specific (fstab, LVM configuration, SSH host keys and the like). But this problem is exactly the same as syncing multiple servers in the cloud, and it is handled very well by Ansible. Plus, Ansible lets you define tasks and replicate only some of them on some machines, and so on. It is then quite easy to completely synchronize multiple computers and have the same work environment on all of them.



Breaking out of a chroot() padded cell

Chroot jails: what are they? How can they be broken out of? How to set them up correctly?


Quick comparisons of solutions for 3D cross-platform (mobile) development

9 min read

I spent some time lately comparing the available development toolkits for 3D games / apps on mobile platforms (mostly for hobby / indie apps). I do not want to adapt my code base too much depending on the target platform, so I was looking for a toolkit letting me write most of the code once and build the app for the various platforms on the market. My use case: a sufficient 3D engine (not necessarily a high-end thing, just the basics to write 3D apps decently; I consider Three.js sufficient for my needs, for instance). I work on Linux, so the ability to develop on Linux would be a real plus. Of course, it should be as cheap as possible :) And finally, I pay much attention to the EULAs and licenses, as I do not want to force my users to send “anonymous” statistics, and I do not want my app to need extra (and useless) permissions.

Note: I did not test these toolkits deeply; I am just reporting here what I found while playing a bit with them and comparing features, licenses and requirements. I only included toolkits that suit my needs, and some toolkits may be missing simply because they do not match them.

Note: I need to be cross-platform, meaning I want to be able to target Android, iOS and Firefox OS. I therefore need a WebGL export ability (for Firefox OS). Anyway, having a WebGL export is a plus, as it means you can build a web app, which is interesting for my use cases.



Cordova

The first toolkit I had a look at was Cordova. It allows you to write pure web apps (using standard HTML / CSS / JS, with some extra APIs extending what is natively available, for bluetooth for instance) and to package them into native apps distributed through the market. What it does is basically add a wrapper around your web app to render it outside a browser, using the web abilities provided by the webviews on iOS and Android. Writing a web app is really easy, so having a first working prototype with Cordova is super fast. Cordova runs on Linux without any problems, and it is completely free of charge.

It works pretty well for 2D graphics and basic applications (although it needs some extra permissions, as it uses a webview, even when it is not communicating over the internet). But when it comes to 3D graphics using WebGL, you will be in trouble. Indeed, the webview in Android 4.x uses an old version of Chrome, even if you install the latest Chrome on your mobile. You will then not be able to use WebGL, as it is simply not supported, unless you use some hack to ship a more recent Chrome version, for instance Crosswalk. On iOS this is even worse, as webviews prior to iOS 8 do not support WebGL (and as far as I know, there is no alternative that is both stable and reliable and would pass the App Store review process). This means you will not be able to target iPhones up to (and including) the iPhone 4, and the iPad 1, which in my opinion is a real problem.

Unreal Engine

The second option is to use the Unreal Engine 4. It is a complete 3D game engine, including many tools to build 3D apps, which you can deploy on many platforms (desktop, web and mobile). You can code in C++ with it, and script using many visual tools. It includes dedicated APIs for advanced features such as Virtual Reality (VR) and may sound overkill.

You can develop on Windows and Mac (officially supported), but not on Linux (at least, not officially). However, it seems that the editor can be installed on Linux at the cost of a bit of hacking, and this should become even easier in the near future, as there seems to be a developing Linux community.

Unreal Engine charges you 5% royalties past the first $3k of revenue.

Here are some relevant EULA fragments:

12. Hardware and Usage Data

You acknowledge that, as a default setting, the Engine Code will collect and send to Epic anonymous hardware and usage data from end users of Products. This functionality is used by Epic to improve the Engine Code. You may modify the Engine Code under the License to turn off that functionality in your Product, or you may include in your Product the capability for your end users to turn off that functionality in the Product.


You agree to keep accurate books and records related to your development, manufacture, Distribution, and sale of Products and related revenue. Epic may conduct reasonable audits of those books and records. Audits will be conducted during business hours on reasonable prior notice to you. Epic will bear the costs of audits unless the results show a shortfall in payments in excess of 5% during the period audited, in which case you will be responsible for the cost of the audit.

The second fragment is a standard clause, as you have to pay royalties depending on your revenue. But the first one is really concerning: by default, Unreal Engine will track your users and send anonymous statistics (thus requiring extra, unneeded permissions and raising privacy concerns). However, once you are aware of it, the EULA lets you freely modify your app to prevent this, so it is not a big deal in the end.


Unity

The latest solution I found (used by Monument Valley, for instance) is the Unity game engine. The personal license is sufficient in most cases, and is free to use up to $100k gross revenue. The dev tools are available on Windows and Mac, but there is no Linux version (and no hacky way to get it on Linux).

Here are some relevant fragments from EULA as well:

(c) users will be required to complete a user survey to activate the Software. Unity Pro users who are not eligible to use Unity Personal may not develop and publish Licensee Content for the iOS and Android platforms without purchasing the applicable Unity Pro Add-On Product license. Unity may monitor your compliance with and enforce these restrictions and requirements including but not limited to monitoring the number of downloads of your Licensee Content and any available revenue estimate data.

This one is not really clear, and I am not really sure what it actually implies. However, according to and, it seems to imply that statistics are sent and that you cannot avoid it, even in the pro version.

We also include certain device data collection in the runtime of the Software which is incorporated into the applications you create with the software. You should be sure that your privacy policy explains to your players the variety of technical information that is collected and shared with third parties like Unity.


Q: I play a game built with Unity software, what should I know?

A: Unity has probably collected some or all of the following information about your device: Unique device identifier generated from the device MAC/IMEI/MEID (which we immediately convert into a different number using a one way hash); IP address; Device manufacturer and model; the operating system and version running on your system or device; browser type; language; the make of the CPU, and number of CPUs present; the graphics card type and vendor name; graphics card driver name and version (example: "nv4disp.dll"); which graphics API is in use (example: "OpenGL 2.1" or "Direct3D 9.0c"); amount of system and video RAM present; current screen resolution; version of the Unity Player; version of the Unity Editor used to create the content; a number describing whether the player is running on Mac, Windows or other platforms; and a checksum of all the data that gets sent to verify that it did transmit correctly; application or bundle identification ("app id") of the game installed. Some Unity developers use Unity’s analytics and ad services which collect additional information. See FAQs on Unity Analytics and Unity Ads below.

Q: That seems like a lot of data, why so much?

A: We try to limit the collection of this information from any one player or device; however, certain operating systems do not permit us to note that the info has already been collected. This means that the data may be sent to Unity each time you start the game. We use the information to make decisions about which platforms, operating systems and versions of them should be supported by our game development software. We aggregate this data and make it available to developers at This data helps us improve our Services and helps developers improve their apps.

8. Your choices about Unity’s collection and use of your information

You always have the option to refrain from using the Service or to discontinue using the Service if you do not want information collected about you.

They also explicitly say in the FAQ that there is no opt-out, and the anonymous stats are indeed browsable at In conclusion, contrary to Unreal Engine, I do not think you can easily prevent the engine from sending anonymous statistics, which is a pity in my opinion. Moreover, a number of threads mention extra permissions required by Unity (such as network access, to send the statistics), and there seems to be no way to avoid requiring those permissions while still conforming to the EULA: and


EDIT: For 2D graphics, this StackOverflow post might be interesting.

EDIT2: I also found EdgeLib and Emo, but they do not support web export.

EDIT3: CryEngine also supports Linux, Windows, iOS and Android, but not the web, and it is very expensive (license at $10 per month).

EDIT4: Gameplay3D is also an open-source toolkit usable for cross-platform development, written in C++, but it does not officially support emscripten for JavaScript output (and they do not plan on supporting it).



Kivy: Cross-platform Python Framework for NUI Development

A framework to build cross-platform apps in Python, running on Windows, Mac, Linux, Android and iOS. It looks a bit like a Python Cordova. I should definitely have a look at it :)


Bug 1035668 – Problematic support for Logitech M560 mouse

How to use a Logitech M560 with Linux. By default, buttons will not be mapped correctly and the mouse will emit keyboard events.


Personal review of the Lenovo Thinkpad T440

8 min read

I recently changed my laptop and bought a Lenovo T440. I used to have a Clevo W150ERQ (actually an LDLC Saturne, but LDLC just rebrands Clevo's laptops). It was a really good laptop, but I bought it at a time when I was looking for a powerful computer more than a light notebook, and it was way too heavy. As a side effect, it also had a poor battery (3 hours of battery life at most), which I somehow managed to preserve thanks to many tweaks, despite it having lost almost 50% of its original capacity. Finally, it was built around the NVIDIA Optimus technology, which lacked serious Linux support.


So, when looking for a new laptop, my goals were: small and lightweight (the previous Clevo was 3 kg), powerful enough for my needs (no need for a high-end GPU, as I do not game on my laptop, but still a decent CPU with built-in AES encryption capabilities), a long battery life, to be able to travel without worrying about finding a power plug, and a large matte screen (no less than 14", with HD resolution). One model fitting almost all my needs was the Lenovo T440, with some options (Intel i5 instead of the i3 of the base model, and HD screen). I have had time to play with it a bit, so here is some feedback, in case it can be useful to anyone. This feedback might be updated as time passes. Note that I won't comment on anything related to Windows, as there is plenty of info about this notebook running Windows on the web, and I never used it with Windows.

Windows refund

Unfortunately, no Windows refund is available with Lenovo (it came with Windows 8.something preinstalled). They would only refund the entire laptop =(


I did not want to spend too much on my laptop. In particular, I did not want to pay more than €1k, which is often the price of such laptops. I got mine with the HD screen, the biggest external battery and an Intel i5 for €800 (with a €200 discount from Lenovo).

External aspect

The laptop is quite thin and matte, which makes it look very elegant :) The charger is very slim, and the total weight is very reasonable, below 2 kg with the extra battery and charger. The screen looks nice and comfortable, although it does not have very wide viewing angles (not to say they are rather narrow…). It is very bright and can be set to any level between 0 and 100% with standard Linux tools such as xbacklight (a good point compared to the previous Clevo, which only had a few possible levels). It has an internal battery (25 Wh), and an extra external battery can be used for more capacity.

It has standard ports: two USB ports, a mini DisplayPort, a VGA port, an SD card reader and a true Ethernet port. It only has a single combined jack for the microphone and the headphones.

Opening it

I took one of the cheaper models and wanted to put in an SSD I already had, to avoid having a mechanical drive in such a laptop. So I had to open it before anything else. Contrary to other Lenovo laptops, this one is very slim and very compact, so there is no easy access to the components (no direct access to memory chips or hard drives, for instance). You have to take the whole base cover off to change any component inside. That may sound impressive, but it is not very difficult to do (though it will mean higher costs if you don't want to do it yourself). You just need a standard screwdriver, and there is no sticker about the warranty being void if opened.

First, disable the internal battery in the BIOS. Then, the best way I found is to remove the 8 screws (standard screwdriver) below the laptop and unclip the base cover, starting from the rear (below the external battery). Do not use any tool but your nails, to avoid damaging the base cover and to remove it easily. Note that the screws cannot be fully removed from the base cover, and that you should not have to force at any point. If you do, check that the screws are fully unscrewed; if you still need to apply force, try opening from the other side and work your way back to this position.

I had never opened such a computer before, and it took me around 30 minutes to swap the hard drives.

For info, if you have a version without the 3G modem, you have a free M.2 slot (42 mm, if I remember correctly) to put in an extra SSD. This slot may already be taken by the SSD cache if you chose an HDD + SSD cache configuration.

Support status under Linux

I run ArchLinux, so my remarks may not apply to other distributions. The laptop has a dedicated wiki page in the documentation. It covers the variant with a touchscreen, but most remarks are applicable. Once the necessary packages are installed, everything works just fine.

I chose not to fight with UEFI, and use the regular BIOS (BIOS emulation) instead. I just had to disable SecureBoot and select it in the BIOS. The BIOS is really nicely designed, with many options (Fn is the leftmost key on the keyboard, for instance, but you can swap Fn and Ctrl in the BIOS, which is very practical as I prefer this layout). I installed the Intel drivers for Xorg, the synaptics driver for the ClickPad and the iwlwifi drivers for the wifi card, and everything worked nicely out of the box.

I use i3wm, so I did not have any predefined mappings for the function keys. All of them are recognized as XF86 function keys, and I just needed a line in my i3 config file to bind them to xbacklight calls to increase and decrease the brightness. The same goes for basically all the other function keys; nothing difficult to note there.
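For illustration, the brightness bindings in an i3 config look something like this (a sketch, not my exact config; check the key names emitted on your machine with xev, and adjust the step size to taste):

```
# Bind the brightness function keys to xbacklight
bindsym XF86MonBrightnessUp exec --no-startup-id xbacklight -inc 10
bindsym XF86MonBrightnessDown exec --no-startup-id xbacklight -dec 10
```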

The keyboard backlight also works out of the box. Either it is handled fully in hardware, or it is well supported under Linux; in any case, I did not have to install anything to use it. There are two levels of lighting available.

Battery life

I have not done any power-saving tuning yet, and the laptop is already really impressive on this point! I feared that the preinstalled Windows might be heavily optimized for the laptop and that I would get poor battery life on Linux, but that is not the case at all.

I have an internal 25 Wh battery and an external 75 Wh battery. The laptop was marketed at around 15 hours of battery life, and I must say it actually runs for 15 hours. It consumes no more than 5 W with wifi connectivity (while browsing the web), my SSD, and the backlight set to 10%. That is really impressive, and Intel has done a very good job with its latest generation of Core i5. I compiled some stuff (Python matplotlib), still with 10% backlight, and it jumped up to 15 W, no more. For similar performance (at least, a similar impression), my previous Clevo consumed around 40 W.

Out of the box, with the external battery, one can thus easily work continuously for 10 to 15 hours. Tests say it can run up to 28 hours in idle mode. Totally satisfied on this point!
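For reference, the back-of-the-envelope arithmetic behind these figures (capacities and draws as measured above; real runtimes will be a bit lower):

```python
CAPACITY_WH = 25 + 75  # internal + external battery

def runtime_hours(draw_watts):
    """Ideal runtime, ignoring conversion losses and battery aging."""
    return CAPACITY_WH / draw_watts

print(runtime_hours(5))   # light use (~5 W) -> 20.0 hours
print(runtime_hours(15))  # compiling (~15 W) -> about 6.7 hours
```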


Webcam

Not much to say, it works. It is an integrated webcam, so nothing particularly good or bad.


Not tested yet.

SD card reader

Not tested yet.

Audio quality

Not much to say. Not very bad, not exceptionally good. :)

Fan and temperature

The CPU in normal use is at around 40°C. The laptop does not get very hot and is very silent (I haven't heard the fan yet).


tp_smapi is a tool to handle some ACPI calls on Thinkpads. It can (apparently) handle the HDAPS feature, to detect shocks and avoid damage to the hard drive. As I have an SSD, I do not use this feature.

A more interesting tool is tpacpi-bat, available from the AUR, which allows you to set thresholds for charging and discharging the battery. This way, you can set thresholds at 40% and 80% to keep the battery in its sweet spot, according to Lenovo's advice.

This works really nicely and is well documented in the ArchWiki.


Touchpad

There is a pointing stick in the keyboard (but I am not a huge fan of pointing sticks). The touchpad is a bit tricky to set up correctly. It is a single sensitive area without physical buttons, and as far as I know the whole surface is one clickable button. You therefore have to map it correctly in Xorg, defining "virtual buttons" that couple the position of your finger on the touchpad to the click event. This can be done quite easily, and people have posted many configurations such as this one.
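As an illustration, here is a minimal sketch of such a virtual-buttons setup with the synaptics driver; the SoftButtonAreas coordinates are an example for a clickpad of this kind and will need adjusting to your device:

```
# /etc/X11/xorg.conf.d/70-synaptics.conf (sketch)
Section "InputClass"
    Identifier "ThinkPad clickpad"
    MatchIsTouchpad "on"
    Driver "synaptics"
    # Format: RBL RBR RBT RBB MBL MBR MBT MBB (0 = up to the edge).
    # Right button: rightmost 40% of a bottom strip; middle button between 40% and 60%.
    Option "SoftButtonAreas" "60% 0 0 2400 40% 60% 0 2400"
EndSection
```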

I do not yet have a fully working configuration for my use, and it needs some tweaks, but nothing really problematic in my opinion. I also use syndaemon, as it is very easy to hit the touchpad accidentally while typing.


Conclusion

This is a great laptop, in my opinion, with only minor points that could be improved, and it has a really good price / performance ratio. The battery life in particular is really impressive, and it works very well out of the box under Linux.

For a full review (focused more on hardware and raw performance than on Linux compatibility), here is one.


Streaming your speakers' sound over the network

3 min read

I have a desktop PC and a laptop, and I was looking for a way to send my laptop's sound to the good-quality speakers plugged into my desktop whenever I am on the same network. It turns out this is very easy to do with PulseAudio.

The first method: simple, and works everywhere

Make sure PulseAudio is set up on both computers, and install paprefs. Launch paprefs and, in the Multicast/RTP tab, tick the receiver box on the PC the speakers are plugged into, and the sender box on the other one.

On the PC sending the music (the sender), you have the choice between three options, only two of which are of interest here: Send audio from local speakers (which sends all local sound to the remote speakers) and Create separate audio device for Multicast/RTP (which adds a Multicast/RTP sound output that you can enable or not, per application).

If you have no firewall and both machines are indeed on the same network, that is all you have to do!
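For the record, what paprefs does behind the scenes is load PulseAudio's RTP modules; a rough equivalent in /etc/pulse/default.pa (the sink name and description below are arbitrary) would be:

```
# Sender: create a separate Multicast/RTP output and stream its monitor
load-module module-null-sink sink_name=rtp sink_properties="device.description='RTP Multicast'"
load-module module-rtp-send source=rtp.monitor

# Receiver: play back incoming RTP streams
load-module module-rtp-recv
```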

However, you will quickly notice that the quality is not great (at least for me): a good FLAC on one side quickly comes out sounding like a 64 kbps MP3 from a few years back on the other…

The second solution: even simpler, and works better!

The second solution uses the first two tabs of paprefs: Network Access and Network Server.

On the PC sending the sound, tick Make discoverable PulseAudio network sound devices available locally in the first tab.

On the PC receiving the sound, tick the first three boxes (Enable network access to local sound devices).

And that's it =) The audio outputs of your other PC will now show up on your machine (for instance under Audio -> Audio device in VLC). And this time, no quality issues at all! Tested over a wired connection: no bandwidth / lag / sound problems to report so far.
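Here again, the checkboxes map to PulseAudio modules; roughly, in /etc/pulse/default.pa (the IP range in auth-ip-acl is an example and should match your LAN):

```
# Receiver (the PC with the speakers): accept network clients and publish sinks over Avahi
load-module module-native-protocol-tcp auth-ip-acl=127.0.0.1;192.168.1.0/24
load-module module-zeroconf-publish

# Sender: discover the published sinks on the network
load-module module-zeroconf-discover
```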

Note, however, that on my setup two outputs show up: one called Audio interne… and the other called Simultaneous output to Audio interne…. With the second one the sound stutters and it is unusable, but the first one works perfectly.


Mainly a single link: but they do everything from the command line there, and it is actually much simpler to go through paprefs.



Using an Arch Linux PC to connect a Raspberry Pi to the Internet

3 min read

I have a Raspberry Pi and a laptop running Arch Linux, and I move around quite a lot with both. But I do not always have a router at hand to plug them into the same network and work comfortably. It is very easy to set up, in 5 minutes, a configuration that lets me connect the Raspberry Pi to my laptop and share the laptop's wifi Internet connection with it. That way, no more problems: I can work on the Raspberry Pi anywhere.

Let's go!

Note: I use this configuration for development; it is therefore not necessarily optimal and should certainly be adapted before being used in production.

Installing a DHCP server on the laptop

Let's start by installing a DHCP server on the Arch PC, to avoid having to set a static IP address on the Raspberry Pi. That way, any stock image can be used without thinking about it, as if a proper router were present.

The simplest approach is to follow this page of the documentation.

  1. Assign a static IP address to the ethernet interface (10.1.0.1/24 here is an example; make sure it does not conflict with your existing network configuration):
ip link set up dev enp4s0f2
ip addr add 10.1.0.1/24 dev enp4s0f2
  2. Move the default /etc/dhcpd.conf to /etc/dhcpd.conf.example so you can edit a fresh one with peace of mind.

  3. Edit /etc/dhcpd.conf. For reference, here is mine (the addresses are examples from a 10.1.0.0/24 subnet; adapt them to your setup):

option domain-name-servers 8.8.8.8, 8.8.4.4;
option subnet-mask 255.255.255.0;
option routers 10.1.0.1;
subnet 10.1.0.0 netmask 255.255.255.0 {
  range 10.1.0.10 10.1.0.100;
}

This tells clients to use Google's DNS servers (which are reachable from anywhere; if you run a DNS server on your machine, use it instead), gives the router's address, and defines the range of addresses to hand out.

  4. You can start the service with systemctl start dhcpd4. I preferred to restrict the interface the DHCP server listens on, so that it only serves the ethernet interface. To do so, just follow the "Listening on only one interface - Service file" instructions in the Arch Linux documentation.
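The "Service file" approach in the documentation boils down to a drop-in override (systemctl edit dhcpd4.service) that resets ExecStart and appends the interface name; a sketch, assuming current Arch paths:

```ini
[Service]
ExecStart=
ExecStart=/usr/bin/dhcpd -4 -q -cf /etc/dhcpd.conf -pf /run/dhcpd4.pid enp4s0f2
```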

Firewall and kernel configuration

Next, iptables and the kernel must be configured to forward network packets to and from the Raspberry Pi.

To do so (10.1.0.0/24 here stands for the example subnet assigned to the ethernet interface):

iptables -A FORWARD -o wlp3s0 -i enp4s0f2 -s 10.1.0.0/24 -m conntrack --ctstate NEW -j ACCEPT
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A POSTROUTING -t nat -j MASQUERADE

and we enable forwarding in the kernel:

echo 1 | tee /proc/sys/net/ipv4/ip_forward
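That echo only lasts until reboot; to make forwarding persistent, the same setting can go into a sysctl drop-in file (the file name is an arbitrary choice):

```
# /etc/sysctl.d/30-ipforward.conf
net.ipv4.ip_forward = 1
```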

The script that ties it all together

Once the dhcpd server is configured, this script starts / stops everything.

Beware: the flush in the stop function may be too aggressive for your setup.



Synchronizing my computers, part 1/2

9 min read

A review of the available solutions

I use at least two computers daily: my laptop and my desktop. Both have large hard drives (> 1 TB), and I have been looking for a while for a way to synchronize them, so I can work with my files (and my basic user configuration, such as my .vimrc or my rxvt-unicode theme) completely transparently on either machine.

I also have a large external hard drive, on which I want to make regular full backups of some files.

So I have to keep 3 hard drives permanently in sync: my desktop's, my laptop's and the external drive. Doing this by hand is long and tedious (and potentially error-prone), so I am looking for a way to automate it all properly. Moreover, I have a dedicated server with spare disk space, on which I would like to store part of the backups, to have an off-site copy in case of trouble.

Candidate solutions must meet the following criteria:

  • Make it easy to choose which folders and files to synchronize, and be able to synchronize large files without problems and in reasonable time. I want to synchronize my whole home directory between my two computers and the external drive, but I do not want to synchronize all my music and videos with the server, so as not to fill it up needlessly. I also have quite a few scripts already versioned with Git, which I want to be able to exclude easily.
  • Offer encrypted backups (encrypted transfers between machines, and encryption of my data on my servers).
  • Allow a decentralized architecture: no need to go through my server to synchronize my machines when they are on the same local network.

I have identified three potential solutions: Unison, Syncthing (found through this article by tmos) and git-annex (found through this article). (I am of course looking for an open-source solution I can install on my own server.)


Unison

Unison is a cross-platform file synchronization program written in OCaml. It handles conflicts automatically when possible, and with user intervention when needed. It takes great care to leave both systems in a working state at every moment, so that recovery is easy if something goes wrong. However, it started out as a research project and thus once had very active development. This is no longer the case, and development is now much slower, as explained on the project page.

Another limitation: Unison can only synchronize pairs of machines. This means that to synchronize my 3 machines I would have to use a star topology, always going through my server. That is not great: since my laptop and desktop are very often on the same network, it would be nice to bypass the server in that case and get higher transfer rates.

Finally, encrypting the synchronized data appears to be non-trivial and is not implemented by Unison itself; you have to add a layer of EncFS or similar. A discussion on the Arch Linux forum (in English) mentions this possibility, and a post on the [Encfs-users] mailing list gives a few more details.


Syncthing

(I am summarizing information from the official website and from Tom's article.)

Syncthing is written in Go. All communications are encrypted with TLS, and every node is authenticated and must be explicitly added before it can access the files. Syncthing therefore uses its own protocol and its own authentication. It is cross-platform, has a very nice interface and requires no particular configuration (it is supposed to work out of the box, using uPnP if needed so that you do not have to set up port forwarding).

Each machine can exchange data with every machine it has exchanged identifiers with. It is thus very easy to build either a centralized or a decentralized architecture (in the first case, all machines only know the central server's identifier; in the second, every machine knows every other machine's).

All configuration happens through a nice password-protected web interface. You can share each folder however you like, and you can even share some folders with outside people, dropbox-style, without Dropbox :). And given the software's architecture, there is no server and no client: a single program runs everywhere.

Syncthing's nice interface

The code is available on Github, the repository is active and the tags are signed.

So a priori it satisfies most of my needs. I just have to find a way to encrypt my documents on my server (my disks are already encrypted, but I would ideally like a big container that is decrypted during each synchronization and locked afterwards). Apparently, this is under discussion.

As for conflict handling, it does not look perfect, as this issue on Github shows. The current policy seems to be newest wins, which can cause data loss (a backup copy may be made, since Syncthing can version files, but I am not sure about this point; to be tested).


Git-annex

Git-annex is a program that synchronizes your files using Git. It is effectively a plugin for Git: it stores the files in a Git repository but does not version their content. This avoids versioning large files, and thus Git's usual problems with large, potentially binary files.

It is certainly the most mature of the three programs presented here: development is active, there is a community behind it (with people on IRC!) and the developer ran a successful Kickstarter campaign last year to fund a year of full-time work on the software. Notably, he recently implemented a web assistant that manages synchronization through a very nice Syncthing-like web interface, without the command line. I still think Syncthing is more user-friendly, though.

The software's capabilities, listed on the project page, are quite impressive. Among the advanced features:

  • All files can be listed everywhere, even when their content is not actually present on the disk. Git-annex thus knows where to fetch each file, which helps keep track of things across several storage media (several external backup drives, for instance).

  • Git-annex uses standard repositories, so the repository remains usable even if Git and git-annex fall into oblivion.

  • It can manage as many clones as you want, and can therefore synchronize using whatever architecture you want. It can assign different weights to each source, which means I can synchronize my computers through my server, and whenever they are on the same local network, synchronize directly without going through the server.

  • It supports several encryption options, to encrypt remote copies, on my server for instance, and it can encrypt while still sharing between several users. Out of the box it can use a remote server reachable over SSH (transfers are then immediately encrypted by SSH) or an Amazon S3 server.

  • Files can be shared with friends through a buffer server. That server only stores files currently in transit, so it does not need much disk space.

  • It handles conflicts.

  • It can use patterns to exclude files, or on the contrary include them. Full queries can be written, such as "only MP3s and files smaller than N MB".

  • Bonus: it can be used to serve a directory similar to my pub, which I will surely set up, to let people clone my whole pub directly.
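The include/exclude queries mentioned above are git-annex "preferred content" expressions; the "only MP3s and files under N MB" example would look roughly like this (100mb is an arbitrary choice):

```
include=*.mp3 or smallerthan=100mb
```

Such an expression is applied to a repository with the git annex wanted command.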

Several screencasts are available in the documentation, and a write-up in French is available here.

Its strong points really seem to be its advanced features and its fine-grained control over file location. Git-annex does not just let you synchronize files, but also move them around geographically, share them and back them up.

Conclusion (for now)

I have identified three candidate solutions so far: Unison, Syncthing and git-annex, in increasing order of features. The information above comes solely from articles, documentation and user feedback. Syncthing looks like the most user-friendly of the three, and it has interesting, advanced features. Git-annex clearly has the most features, but is accordingly harder to learn.

After this analysis, I am leaning towards git-annex for my backups. See you soon for the second part of this article, with a full report on my synchronization procedure (in a little while, though… I still need to think my setup through and to tame git-annex =).


  • On reflection, also using the external hard drive adds a lot of redundancy. So if I can synchronize only part of my files onto it (my music / movie library, for instance), even better.

  • Another solution that might interest you is Tahoe-LAFS, which spreads your files over several servers so that no single server can read your data, and so that if one server goes down you can still recover everything (by default it stores on 10 nodes and only needs 3 to reconstruct the data, if I understood correctly). See also this video about RozoFS at PSES 2014 to get an idea of how it works.

  • A basic rsync is not enough for me, because I need to handle modifications on both ends of the connection at once, and to handle conflicts.
