MacSlow made us a nice illustrative tool that shows the usefulness of social distancing:
https://macslow.org/epidemic-spread/
This guy (Aral Balkan) seems to know what he is talking about.
To make live backups of a guest on an ESXi host's SSH UNIX shell, you can utilize the fact that when a snapshot of a VMDK file gets made, the original VMDK file becomes read-only. That releases the locks that would otherwise prevent vmkfstools from creating a clone.
This means that once you have made a snapshot, you can use vmkfstools on the non-snapshot VMDK files from which the snapshot was made.
Let’s get started scripting this.
GUEST="GUESTNAME"
DISKS="$GUEST EXTRADISK"
SRC=/vmfs/volumes/STORAGE/$GUEST
DST=/vmfs/volumes/STORAGE/backup/$GUEST
First get the VmId:
VMID=`vim-cmd vmsvc/getallvms | grep $GUEST | cut -d " " -f -1`
Create a poor man’s backup snapshot on $GUEST:
vim-cmd vmsvc/snapshot.create $VMID backup poor-mans-backup 0 0
Create the clones of the non-snapshot VMDK files (the ones without numbers after $DISK):
mkdir -p $DST
for DISK in $DISKS; do
    vmkfstools -i $SRC/$DISK.vmdk $DST/$DISK.vmdk -d sesparse
done
Now remove the snapshots from $GUEST:
vim-cmd vmsvc/snapshot.removeall $VMID
Now, copy the VMX file:
cp $SRC/$GUEST.vmx $DST/$GUEST.vmx
Alternatively you can use ghettoVCB which is a little program that does the same thing.
For a Jenkins environment I had to automate the creation of a lot of identical build agents. Identical except, of course, for the network configuration. Sure, I could have used Docker or whatnot. But the organization standardized on VMWare ESXi. So I had to work with the tools I got.
A neat trick that you can do with VMWare is to write so called guestinfo variables in the VMX file of your guests.
You can get SSH access to the UNIX-like environment of a VMWare ESXi host. In that environment you can do typical UNIX scripting.
First we prepare a template that has the VMWare guest tools installed. We punch the zeros of the VMDK file and all that stuff, so that it's nicely packaged and quick to make clones from. On the guest you do:
dd if=/dev/zero of=/largefile bs=10M ; rm /largefile
On the ESXi host you do:
vmkfstools --punchzero /vmfs/volumes/STORAGE/template/DISK.vmdk
Now you can for example do this (on the ESXi host’s UNIX environment):
SRC=/vmfs/volumes/STORAGE/template
DST=/vmfs/volumes/STORAGE/auto

mkdir -p $DST/$1

# Don't use cp to make copies of vmdk files. It'll just
# take ages longer as it will copy 0x0 bytes too.
# vmkfstools is what you should use instead
vmkfstools -i $SRC/DISK.vmdk $DST/$1/DISK.vmdk -d thin

# Replace some values in the destination VMX file
cat $SRC/TEMPLATE.vmx | sed s/TEMPLATE/$1/g > $DST/$1/$1.vmx
And now of course you add the guestinfo variables:
echo "guestinfo.HOSTN=$1" >> $DST/$1/$1.vmx echo "guestinfo.EXTRA=$2" >> $DST/$1/$1.vmx
Now when the guest boots, you can make a script to read those guestinfo things out and let it for example configure itself (on the guest):
#! /bin/sh
HOSTN=`vmtoolsd --cmd "info-get guestinfo.HOSTN"`
EXTRA=`vmtoolsd --cmd "info-get guestinfo.EXTRA"`
if test "$EXTRA" = "provision"; then
    echo $HOSTN > /etc/hostname
    reboot
fi
Some other useful VMWare ESXi commands:
# Register the VMX as a new virtual machine
VIMID=`vim-cmd /solo/register $DST/$1/$1.vmx`

# Turn it on
vim-cmd /vmsvc/power.on $VIMID &

# Answer 'Copied' on the question whether it got
# copied or moved
sleep 2
VMMSG=`vim-cmd /vmsvc/message $VIMID | grep "Virtual machine message" | cut -d : -f -1 | cut -d " " -f 4`
if [ ! -z $VMMSG ]; then
    vim-cmd /vmsvc/message $VIMID $VMMSG 2
fi
That should be all you need. I’m sure we can adapt the $1.vmx file such that the question doesn’t get asked. But my solution with answering the question also worked for me.
Next thing we know you’re putting a loop around this and you just ‘programmed’ creating a few hundred Jenkins build agents on some powerful piece of ESXi equipment. Imagine that. Bread on the table and the entire flock of programmers of your customer happy.
But! Please don't hire me to do your DevOps. I've been there before several times. It sucks. You get to herd brogrammers. They suck the blood out of you with their massive ignorance of almost all really simple standard things (like versioning, building, packaging, branching, etc. Anything that must not be invented here). Instead of taking the time to be professional about their job and reading five lines of documentation, they'll waste your time with their nonsense self-invented crap. Which you end up having to automate. Which they always make utterly impossible and (of course) non-standard. While the standard techniques are ten million times better and easier.
Yesterday I fixed my Bestway Lay Z Spa. It gave the infamous E02.
So I opened up the thing. Because in a video a guy explained that the water flow sensor is a magnetic switch, I decided to try taking the sensor itself out of the component. Then I tried with an external magnet to get the detached switch to close. The error was gone and I could make the motor run without any water flowing. That's probably not a great idea if you don't want to damage anything. So, of course, I didn't do that for too long.
However, when I reinserted the sensor into the component and closed the valve myself, the E02 error still happened. I figured the magnet that gets pushed to the ceiling of the component was somehow weakened.
Then I noticed a little notch on it. I marked it in a red circle:
I decided to take a flat file and file it off. When I now closed the valve myself, I could, just like with the external magnet, make the motor run without any water flowing.
I reassembled it all and reattached the device to the bath tub. It all works. Warm water this evening! I hope there will be stars outside.
I said it before, and I say it again: get those national asses out of your EU heads and start a European army.
How else are you going to tackle Turkey, Syria and the US retreating from it all?
The EU is utterly irrelevant in Syria right now. Because it has no own power projection.
When I said "a European army", I meant aircraft carriers. I meant nuclear weapons (yes, indeed). I meant European fighter jets that are superior to the Chinese, American and Russian ones. I meant a European version of DARPA. I meant huge, huge euro investments. I meant ECB (yes, our central bank) involvement in it all. To print money. Insane amounts of ECB-backed euro money creation to fund this army and the technology behind it.
I mean political EU courage. No small things. Super big, huge and totally insane amounts of investments: a statement to the world: The EU is going to defend itself the coming centuries, and it’s going to project military power.
I doubt it will happen in my lifetime.
I finished my earlier work on build environment examples, illustrating how to do versioning of shared object files right with autotools, qmake, cmake and meson. You can find it here.
The example directories show, for various build environments, how to create a good project structure that will build libraries that are versioned with libtool or have versioning equivalent to what libtool would deliver, have a pkg-config file and have a so-called API version in the library's name.
Information on this can be found in the autotools mythbuster docs, the libtool docs on versioning and freeBSD’s chapter on shared libraries. I tried to ensure that what is written here works with all of the build environments in the examples.
libpackage-4.3.so.2.1.0, what is what?
You’ll notice that a library called ‘package’ will in your LIBDIR often be called something like libpackage-4.3.so.2.1.0
We call the 4.3 part the APIVERSION, and the 2.1.0 part the VERSION (the ABI version).
I will explain these examples using semantic versioning as APIVERSION and either libtool’s current:revision:age or a semantic versioning alternative as field for VERSION (like in FreeBSD and for build environments where compatibility with libtool’s -version-info feature ain’t a requirement).
Noting that with libtool’s -version-info feature the values that you fill in for current, age and revision will not necessarily be identical to what ends up as suffix of the soname in LIBDIR. The formula to form the filename’s suffix is, for libtool, “(current – age).age.revision”. This means that for soname libpackage-APIVERSION.so.2.1.0, you would need current=3, revision=0 and age=1.
In case you want compatibility with or use libtool’s -version-info feature, the document libtool/version.html on autotools.io states:
The rules of thumb, when dealing with these values are:
- Increase the current value whenever an interface has been added, removed or changed.
- Always increase the revision value.
- Increase the age value only if the changes made to the ABI are backward compatible.
The updating-version-info section of libtool's docs, about the -version-info feature, states:
- Start with version information of ‘0:0:0’ for each libtool library.
- Update the version information only immediately before a public release of your software. More frequent updates are unnecessary, and only guarantee that the current interface number gets larger faster.
- If the library source code has changed at all since the last update, then increment revision (‘c:r:a’ becomes ‘c:r+1:a’).
- If any interfaces have been added, removed, or changed since the last update, increment current, and set revision to 0.
- If any interfaces have been added since the last public release, then increment age.
- If any interfaces have been removed or changed since the last public release, then set age to 0.
When you don’t care about compatibility with libtool’s -version-info feature, then you can take the following simplified rules for VERSION:
- SOVERSION = Major version
- Major version: increase it if you break ABI compatibility
- Minor version: increase it if you add ABI compatible features
- Patch version: increase it for bug fix releases.
Examples of build environments where these simplified rules are or can be applicable are cmake, meson and qmake. When you use autotools you will be using libtool, and then they aren't applicable.
For the API version I will use the rules from semver.org. You can also use the semver rules for your package’s version:
Given a version number MAJOR.MINOR.PATCH, increment the:
- MAJOR version when you make incompatible API changes,
- MINOR version when you add functionality in a backwards-compatible manner, and
- PATCH version when you make backwards-compatible bug fixes.
When you have an API, that API can change over time. You typically want to version those API changes so that the users of your library can adapt to newer versions of the API while at the same time other users still use older versions of your API. For this we can follow section 4.3, called "multiple libraries versions", of the autotools mythbuster documentation. It states:
In this situation, the best option is to append part of the library's version information to the library's name, which is exemplified by Glib's libglib-2.0.so.0 soname. To do so, the declaration in the Makefile.am has to be like this:
lib_LTLIBRARIES = libtest-1.0.la
libtest_1_0_la_LDFLAGS = -version-info 0:0:0
Many people use many build environments (autotools, qmake, cmake, meson, you name it). Nowadays almost all of those build environments support pkg-config out of the box. Both for generating the file as for consuming the file for getting information about dependencies.
I consider it a necessity to ship with a useful and correct pkg-config .pc file. The filename should be /usr/lib/pkgconfig/package-APIVERSION.pc for soname libpackage-APIVERSION.so.VERSION. In our example that means /usr/lib/pkgconfig/package-4.3.pc. We'd use the command pkg-config package-4.3 --cflags --libs, for example.
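For illustration, a minimal package-4.3.pc could look roughly like this (a hand-written sketch with made-up paths and description; normally your build environment generates this file for you):

prefix=/usr
libdir=${prefix}/lib
includedir=${prefix}/include

Name: package-4.3
Description: Example library (hypothetical)
Version: 4.3.0
Libs: -L${libdir} -lpackage-4.3
Cflags: -I${includedir}/package-4.3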
Examples are GLib’s pkg-config file, located at /usr/lib/pkgconfig/glib-2.0.pc
I consider it a necessity to ship API headers in a per API-version different location (like for example GLib’s, at /usr/include/glib-2.0). This means that your API version number must be part of the include-path.
For example, using the earlier mentioned API-version 4.3: /usr/include/package-4.3 for /usr/lib/libpackage-4.3.so(.2.1.0), having /usr/lib/pkgconfig/package-4.3.pc
The linker will for -lpackage-4.3 typically link with /usr/lib/libpackage-4.3.so.2 or with libpackage-APIVERSION.so.(current – age). Noting that the part that is calculated as (current – age) in this example is often, for example in cmake and meson, referred to as the SOVERSION. With SOVERSION the soname template in LIBDIR is libpackage-APIVERSION.so.SOVERSION.
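As a quick sanity check you can inspect the SONAME that the linker will record; a hypothetical session, assuming the library from the example above is installed:

$ readelf -d /usr/lib/libpackage-4.3.so.2.1.0 | grep SONAME
 0x000000000000000e (SONAME)             Library soname: [libpackage-4.3.so.2]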
Without versioning you can't make any API or ABI changes that won't break all your users' code in a way that could be manageable for them. If you do decide not to do any versioning, then at least also don't put anything behind the .so part of your so's filename. That way, at least you won't break things in spectacular ways.
Coming up with your own versioning scheme
Knowing it better than the rest of the world will in spectacular ways make everything you do break with what the entire rest of the world does. You shouldn't congratulate yourself on that. The only thing that can be said about it is that it probably makes little sense, and that others will probably start ignoring your work. Your mileage may vary. Keep in mind that without a correct SOVERSION, certain things will simply not work correctly.
In case of libtool: using your package’s (semver) release numbering for current, revision, age
This is similarly wrong to ‘Coming up with your own versioning scheme’.
The Libtool documentation on updating version info is clear about this:
Never try to set the interface numbers so that they correspond to the release number of your package. This is an abuse that only fosters misunderstanding of the purpose of library versions.
This basically means that once you are using libtool, also use libtool’s versioning rules.
Refusing or forgetting to increase the current and/or SOVERSION on breaking ABI changes
The current part of the VERSION (current, revision and age) minus age, or the SOVERSION, is the most significant field. The current and age are usually involved in forming the so-called SOVERSION, which in turn is used by the linker to know which ABI version to link with. That makes it … damn important.
Some people think ‘all this is just too complicated for me’, ‘I will just refuse to do anything and always release using the same version numbers’. That goes spectacularly wrong whenever you made ABI incompatible changes. It’s similarly wrong to ‘Coming up with your own versioning scheme’.
That way, after your shared library gets updated, all programs that link with it can easily crash, can corrupt data, and might or might not work.
By updating the current and age, or, SOVERSION you will basically trigger people who manage packages and their tooling to rebuild programs that link with your shared library. You actually want that the moment you made breaking ABI changes in a newer version of it.
When you don't want to care about libtool's -version-info feature, there is also a simpler set of rules to follow. Those rules, for VERSION, are:
- SOVERSION = Major version (with these simplified set of rules, no subtracting of current with age is needed)
- Major version: increase it if you break ABI compatibility
- Minor version: increase it if you add ABI compatible features
- Patch version: increase it for bug fix releases.
Not using libtool (but nonetheless doing ABI versioning right)
GNU libtool was made to make certain things easier. Nowadays many popular build environments also make things easier. Meanwhile, GNU libtool has been around for a long time, and its versioning rules, commonly known as the current:revision:age field passed as parameter to -version-info, got widely adopted.
What GNU libtool did was, however, not really a standard. It is one interpretation of how to do it. And a rather complicated one, at that.
Please let it be crystal clear that not using libtool does not mean that you can do ABI versioning wrong. Because very often people seem to think that they can, and think they’ll still get out safely while doing ABI versioning completely wrong. This is not the case.
Not having an APIVERSION at all
It isn't wrong not to have an APIVERSION in the soname. It does, however, mean that you promise to not ever break the API. Because the moment you break the API, you prevent your users from staying on the old API for a little longer. They might have programs that use both the old and the new API. Now what?
When you have an APIVERSION then you can allow the introduction of a new version of the API while simultaneously the old API remains available on a user’s system.
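A hypothetical directory listing of such a system, with the old 4.3 API and a newer 5.0 API installed side by side (the version numbers are of course made up for this example):

$ ls /usr/lib/libpackage-*
/usr/lib/libpackage-4.3.so.2      /usr/lib/libpackage-5.0.so.0
/usr/lib/libpackage-4.3.so.2.1.0  /usr/lib/libpackage-5.0.so.0.0.0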
Using a different naming-scheme for APIVERSION
I used the MAJOR.MINOR version numbers from semver to form the APIVERSION. I did this because only the MAJOR and the MINOR are technically involved in API changes (unless you are doing semantic versioning wrong – in which case see ‘Coming up with your own versioning scheme’).
Some projects only use MAJOR. An example is Qt, which puts the MAJOR number behind the Qt part. For example libQt5Core.so.VERSION (so that's "Qt" + MAJOR + Module). The GLib world, however, uses "g" + Module + "-" + MAJOR + ".0", as they have releases like 2.2, 2.3, 2.4 that are all called libglib-2.0.so.VERSION. I guess they figured that maybe someday in their 2.x series, they could use that MINOR field?
DBus seems to be using a similar thing to GLib, but then without the MINOR suffix: libdbus-1.so.VERSION. For their GLib integration they also use it as libdbus-glib-1.so.VERSION.
Who is right, who is wrong? It doesn’t matter too much for your APIVERSION naming scheme. As long as there is a way to differentiate the API in a) the include path, b) the pkg-config filename and c) the library that will be linked with (the -l parameter during linking/compiling). Maybe someday a standard will be defined? Let’s hope so.
FreeBSD’s Shared Libraries of Chapter 5. Source Tree Guidelines and Policies states:
The three principles of shared library building are:
- Start from 1.0
- If there is a change that is backwards compatible, bump minor number (note that ELF systems ignore the minor number)
- If there is an incompatible change, bump major number
For instance, added functions and bugfixes result in the minor version number being bumped, while deleted functions, changed function call syntax, etc. will force the major version number to change.
I think that when using libtool on FreeBSD (when you use autotools), the platform will provide a variant of libtool's scripts that converts the earlier mentioned current, revision and age rules to FreeBSD's. The same goes for the VERSION variable in cmake and qmake. Meaning that with those three build environments, you can just use the rules for GNU libtool's -version-info.
I could be wrong on this, but I did find mailing list E-mails from ~ 2011 stating that this SNAFU is dealt with. Besides, the *BSD porters otherwise know what to do and you could of course always ask them about it.
Note that FreeBSD’s rules are or seem to be compatible with the rules for VERSION when you don’t want to care about libtool’s -version-info compatibility. However, when you are porting from a libtoolized project, then of course you don’t want to let newer releases break against releases that have already happened.
Nowadays you sometimes see things like /usr/lib/$ARCH/libpackage-APIVERSION.so linking to /lib/$ARCH/libpackage-APIVERSION.so.VERSION. I have no idea how this mechanism works. I suppose this is being done by packagers of various Linux distributions? I also don’t know if there is a standard for this.
I will update the examples and this document the moment I know more and/or if upstream developers need to worry about it. I think that using GNUInstallDirs in cmake, for example, makes everything go right. I have not found much for this in qmake, meson seems to be doing this by default and in autotools you always use platform variables for such paths.
As usual, I hope standards will be made and that the build environment and packaging community gets to their senses and stops leaving this into the hands of developers. I especially think about qmake, which seems to not have much at all to state that standardized installation paths must be used (not even a proper way to define a prefix).
Why is there a difference between APIVERSION and VERSION?
The API version is the version of your programmable interfaces. This means the version of your header files (if your programming language has such header files), the version of your pkgconfig file, the version of your documentation. The API is what software developers need to utilize your library.
The ABI version can definitely be different and it is what programs that are compiled and installable need to utilize your library.
An API breaks when recompiling a program that consumes a libpackage-4.3.so.2, without any changes to that program, is not going to succeed at compile time. The API got broken the moment any possible way the package's API was used won't compile anymore. Yes, any way. It means that a libpackage-5.0.so.0 should be started.
An ABI breaks when without recompiling the program, replacing a libpackage-4.3.so.2.1.0 with a libpackage-4.3.so.2.2.0 or a libpackage-4.3.so.2.1.1 (or later) as libpackage-4.3.so.2 is not going to succeed at runtime. For example because it would crash, or because the results would be wrong (in any way). It implies that libpackage-4.3.so.2 shouldn’t be overwritten, but libpackage-4.3.so.3 should be started.
For example, when you change the parameter of a function in C from an integer to a floating point (and/or the other way around), then that's an ABI change but not necessarily an API change.
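Here is a minimal sketch of that situation; package_set_speed is a made-up function name, not something from a real library:

/* Old header, as shipped with libpackage-4.3.so.2 (hypothetical name):
 *
 *     void package_set_speed(int speed);
 *
 * New header, shipped with the next release; the parameter type changed: */
void package_set_speed(double speed);

int main(void)
{
    /* This call compiles unchanged against both the old and the new header,
     * because the literal 5 converts implicitly: the API did not necessarily
     * break. But a binary that was compiled against the old library passes
     * the argument as an integer, while the new library reads a double
     * (different representation, different calling convention), so running
     * that old binary against the new library gives garbage: the ABI broke,
     * and the SOVERSION must be bumped. */
    package_set_speed(5);
    return 0;
}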
In most projects that got ported from an environment that uses GNU libtool (for example autotools) to, for example, cmake or meson (and in the rare cases that they did anything at all in a qmake based project), I saw people convert the current, revision and age parameters that they passed to libtool's -version-info option into a VERSION string concatenated as (current – age).age.revision, with (current – age) as SOVERSION.
I wanted to use the exact same rules for versioning for all these examples, including autotools and GNU libtool. When you don’t have to (or want to) care about libtool’s set of (for some people, needlessly complicated) -version-info rules, then it should be fine using just SOVERSION and VERSION using these rules:
- SOVERSION = Major version
- Major version: increase it if you break ABI compatibility
- Minor version: increase it if you add ABI compatible features
- Patch version: increase it for bug fix releases.
I, however, also sometimes saw variations that are incomprehensible with little explanation and magic foo invented on the spot. Those variations are probably wrong.
In the example I made it so that in the root build file of the project you can change the numbers and calculation for the numbers. However. Do follow the rules for those correctly, as this versioning is about ABI compatibility. Doing this wrong can make things blow up in spectacular ways.
qmake in the qmake-example
Note that the VERSION variable must be filled in as “(current – age).age.revision” for qmake (to get 2.1.0 at the end, you need VERSION=2.1.0 when current=3, revision=0 and age=1)
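A minimal sketch of the versioning-related lines in such a .pro file, assuming current=3, revision=0 and age=1 as in the note above (illustrative only; the full example in the repository contains more than this):

TEMPLATE = lib
TARGET = qmake-example-4.3   # APIVERSION goes in the target name
VERSION = 2.1.0              # (current - age).age.revision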
To try this example out, go to the qmake-example directory and type
$ cd qmake-example
$ mkdir _test
$ qmake PREFIX=$PWD/_test
$ make
$ make install
This should give you this:
$ find _test/
_test/
├── include
│   └── qmake-example-4.3
│       └── qmake-example.h
└── lib
    ├── libqmake-example-4.3.so -> libqmake-example-4.3.so.2.1.0
    ├── libqmake-example-4.3.so.2 -> libqmake-example-4.3.so.2.1.0
    ├── libqmake-example-4.3.so.2.1 -> libqmake-example-4.3.so.2.1.0
    ├── libqmake-example-4.3.so.2.1.0
    ├── libqmake-example-4.la
    └── pkgconfig
        └── qmake-example-4.3.pc
When you now use pkg-config, you get a nice CFLAGS and LIBS line back (I’m replacing the current path with $PWD in the output each time):
$ export PKG_CONFIG_PATH=$PWD/_test/lib/pkgconfig
$ pkg-config qmake-example-4.3 --cflags
-I$PWD/_test/include/qmake-example-4.3
$ pkg-config qmake-example-4.3 --libs
-L$PWD/_test/lib -lqmake-example-4.3
And it means that you can do things like this now (and people who know about pkg-config will now be happy to know that they can use your library in their own favorite build environment).
$ export LD_LIBRARY_PATH=$PWD/_test/lib
$ echo -en "#include <qmake-example.h>\nint main() {} " > test.cpp
$ g++ -fPIC test.cpp -o test.o `pkg-config qmake-example-4.3 --libs --cflags`
You can see that it got linked to libqmake-example-4.3.so.2, where that 2 at the end is (current – age).
$ ldd test.o
linux-gate.so.1 (0xb77b0000)
libqmake-example-4.3.so.2 => $PWD/_test/lib/libqmake-example-4.3.so.2 (0xb77a6000)
libstdc++.so.6 => /usr/lib/i386-linux-gnu/libstdc++.so.6 (0xb75f5000)
libm.so.6 => /lib/i386-linux-gnu/libm.so.6 (0xb759e000)
libgcc_s.so.1 => /lib/i386-linux-gnu/libgcc_s.so.1 (0xb7580000)
libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xb73c9000)
/lib/ld-linux.so.2 (0xb77b2000)
cmake in the cmake-example
Note that the VERSION property on your library target must be filled in with “(current – age).age.revision” for cmake (to get 2.1.0 at the end, you need VERSION=2.1.0 when current=3, revision=0 and age=1. Note that in cmake you must also fill in the SOVERSION property as (current – age), so SOVERSION=2 when current=3 and age=1).
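For reference, a minimal sketch of what the relevant part of such a CMakeLists.txt can look like, assuming current=3, revision=0 and age=1 as in the note above (the target and file names are illustrative; the actual example in the repository may differ slightly):

add_library(cmake-example SHARED cmake-example.cpp)
set_target_properties(cmake-example PROPERTIES
    OUTPUT_NAME "cmake-example-4.3"   # APIVERSION goes in the library name
    VERSION 2.1.0                     # (current - age).age.revision
    SOVERSION 2)                      # (current - age)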
To try this example out, go to the cmake-example directory and do
$ cd cmake-example
$ mkdir _test
$ cmake -DCMAKE_INSTALL_PREFIX:PATH=$PWD/_test .
-- Configuring done
-- Generating done
-- Build files have been written to: .
$ make
[ 50%] Building CXX object src/libs/cmake-example/CMakeFiles/cmake-example.dir/cmake-example.cpp.o
[100%] Linking CXX shared library libcmake-example-4.3.so
[100%] Built target cmake-example
$ make install
[100%] Built target cmake-example
Install the project...
-- Install configuration: ""
-- Installing: $PWD/_test/lib/libcmake-example-4.3.so.2.1.0
-- Up-to-date: $PWD/_test/lib/libcmake-example-4.3.so.2
-- Up-to-date: $PWD/_test/lib/libcmake-example-4.3.so
-- Up-to-date: $PWD/_test/include/cmake-example-4.3/cmake-example.h
-- Up-to-date: $PWD/_test/lib/pkgconfig/cmake-example-4.3.pc
This should give you this:
$ tree _test/
_test/
├── include
│   └── cmake-example-4.3
│       └── cmake-example.h
└── lib
    ├── libcmake-example-4.3.so -> libcmake-example-4.3.so.2
    ├── libcmake-example-4.3.so.2 -> libcmake-example-4.3.so.2.1.0
    ├── libcmake-example-4.3.so.2.1.0
    └── pkgconfig
        └── cmake-example-4.3.pc
When you now use pkg-config, you get a nice CFLAGS and LIBS line back (I’m replacing the current path with $PWD in the output each time):
$ pkg-config cmake-example-4.3 --cflags
-I$PWD/_test/include/cmake-example-4.3
$ pkg-config cmake-example-4.3 --libs
-L$PWD/_test/lib -lcmake-example-4.3
And it means that you can do things like this now (and people who know about pkg-config will now be happy to know that they can use your library in their own favorite build environment):
$ echo -en "#include <cmake-example.h>\nint main() {} " > test.cpp
$ g++ -fPIC test.cpp -o test.o `pkg-config cmake-example-4.3 --libs --cflags`
You can see that it got linked to libcmake-example-4.3.so.2, where that 2 at the end is the SOVERSION. This is (current – age).
$ ldd test.o
linux-gate.so.1 (0xb7729000)
libcmake-example-4.3.so.2 => $PWD/_test/lib/libcmake-example-4.3.so.2 (0xb771f000)
libstdc++.so.6 => /usr/lib/i386-linux-gnu/libstdc++.so.6 (0xb756e000)
libm.so.6 => /lib/i386-linux-gnu/libm.so.6 (0xb7517000)
libgcc_s.so.1 => /lib/i386-linux-gnu/libgcc_s.so.1 (0xb74f9000)
libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xb7342000)
/lib/ld-linux.so.2 (0xb772b000)
autotools in the autotools-example
Note that you pass -version-info current:revision:age directly with autotools. Libtool will translate that to (current – age).age.revision to form the so's filename (to get 2.1.0 at the end, you need current=3, revision=0, age=1).
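A minimal sketch of the corresponding Makefile.am lines with those values (illustrative; the repository's example has more, such as the pkg-config and header install rules):

lib_LTLIBRARIES = libautotools-example-4.3.la
libautotools_example_4_3_la_SOURCES = autotools-example.cpp
libautotools_example_4_3_la_LDFLAGS = -version-info 3:0:1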
To try this example out, go to the autotools-example directory and do
$ cd autotools-example
$ mkdir _test
$ libtoolize
$ aclocal
$ autoheader
$ autoconf
$ automake --add-missing
$ ./configure --prefix=$PWD/_test
$ make
$ make install
This should give you this:
$ tree _test/
_test/
├── include
│   └── autotools-example-4.3
│       └── autotools-example.h
└── lib
    ├── libautotools-example-4.3.a
    ├── libautotools-example-4.3.la
    ├── libautotools-example-4.3.so -> libautotools-example-4.3.so.2.1.0
    ├── libautotools-example-4.3.so.2 -> libautotools-example-4.3.so.2.1.0
    ├── libautotools-example-4.3.so.2.1.0
    └── pkgconfig
        └── autotools-example-4.3.pc
When you now use pkg-config, you get a nice CFLAGS and LIBS line back (I’m replacing the current path with $PWD in the output each time):
$ export PKG_CONFIG_PATH=$PWD/_test/lib/pkgconfig
$ pkg-config autotools-example-4.3 --cflags
-I$PWD/_test/include/autotools-example-4.3
$ pkg-config autotools-example-4.3 --libs
-L$PWD/_test/lib -lautotools-example-4.3
And it means that you can do things like this now (and people who know about pkg-config will now be happy to know that they can use your library in their own favorite build environment):
$ echo -en "#include <autotools-example.h>\nint main() {} " > test.cpp
$ export LD_LIBRARY_PATH=$PWD/_test/lib
$ g++ -fPIC test.cpp -o test.o `pkg-config autotools-example-4.3 --libs --cflags`
You can see that it got linked to libautotools-example-4.3.so.2, where that 2 at the end is (current – age).
$ ldd test.o
linux-gate.so.1 (0xb778d000)
libautotools-example-4.3.so.2 => $PWD/_test/lib/libautotools-example-4.3.so.2 (0xb7783000)
libstdc++.so.6 => /usr/lib/i386-linux-gnu/libstdc++.so.6 (0xb75d2000)
libm.so.6 => /lib/i386-linux-gnu/libm.so.6 (0xb757b000)
libgcc_s.so.1 => /lib/i386-linux-gnu/libgcc_s.so.1 (0xb755d000)
libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xb73a6000)
/lib/ld-linux.so.2 (0xb778f000)
meson in the meson-example
Note that the version property on your library target must be filled in with “(current – age).age.revision” for meson (to get 2.1.0 at the end, you need version=2.1.0 when current=3, revision=0 and age=1. Note that in meson you must also fill in the soversion property as (current – age), so soversion=2 when current=3 and age=1).
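A minimal sketch of the shared_library() call in such a meson.build, again with current=3, revision=0 and age=1 (illustrative; the repository's example also handles the headers and the pkg-config file):

shared_library('meson-example-4.3',
    'meson-example.cpp',
    version : '2.1.0',     # (current - age).age.revision
    soversion : '2',       # (current - age)
    install : true)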
To try this example out, go to the meson-example directory and do
$ cd meson-example
$ mkdir -p _build/_test
$ cd _build
$ meson .. --prefix=$PWD/_test
$ ninja
$ ninja install
This should give you this:
$ tree _test/
_test/
├── include
│   └── meson-example-4.3
│       └── meson-example.h
└── lib
    └── i386-linux-gnu
        ├── libmeson-example-4.3.so -> libmeson-example-4.3.so.2.1.0
        ├── libmeson-example-4.3.so.2 -> libmeson-example-4.3.so.2.1.0
        ├── libmeson-example-4.3.so.2.1.0
        └── pkgconfig
            └── meson-example-4.3.pc
When you now use pkg-config, you get a nice CFLAGS and LIBS line back (I’m replacing the current path with $PWD in the output each time):
$ export PKG_CONFIG_PATH=$PWD/_test/lib/i386-linux-gnu/pkgconfig
$ pkg-config meson-example-4.3 --cflags
-I$PWD/_test/include/meson-example-4.3
$ pkg-config meson-example-4.3 --libs
-L$PWD/_test/lib -lmeson-example-4.3
And it means that you can do things like this now (and people who know about pkg-config will now be happy to know that they can use your library in their own favorite build environment):
$ echo -en "#include <meson-example.h>\nint main() {} " > test.cpp
$ export LD_LIBRARY_PATH=$PWD/_test/lib/i386-linux-gnu
$ g++ -fPIC test.cpp -o test.o `pkg-config meson-example-4.3 --libs --cflags`
You can see that it got linked to libmeson-example-4.3.so.2, where that 2 at the end is the soversion. This is (current – age).
$ ldd test.o
linux-gate.so.1 (0xb772e000)
libmeson-example-4.3.so.2 => $PWD/_test/lib/i386-linux-gnu/libmeson-example-4.3.so.2 (0xb7724000)
libstdc++.so.6 => /usr/lib/i386-linux-gnu/libstdc++.so.6 (0xb7573000)
libm.so.6 => /lib/i386-linux-gnu/libm.so.6 (0xb751c000)
libgcc_s.so.1 => /lib/i386-linux-gnu/libgcc_s.so.1 (0xb74fe000)
libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xb7347000)
/lib/ld-linux.so.2 (0xb7730000)
Enough with the political posts!
Making libraries that are both API and libtool versioned with qmake, how do they do it?
I started a project on github that will collect what I will call “doing it right” project structures for various build environments.
With right I mean that the library will have an API version in its library name, that the library will be libtoolized and that a pkg-config .pc file gets installed for it.
I have in mind, for example, autotools, cmake, meson, qmake and plain make. First example that I have finished is one for qmake.
Let’s get started working on a libqmake-example-3.2.so.3.2.1
We get the PREFIX, MAJOR_VERSION, MINOR_VERSION and PATCH_VERSION from a project-wide include
include(../../../qmake-example.pri)
We will use the standard lib template of qmake
TEMPLATE = lib
We need to set VERSION to a semver.org version for compile_libtool (in reality it should use what is called current, revision and age to form an API and ABI version number. In the actual example it’s explained in the comments, as this is too much for a small blog post).
VERSION = $${MAJOR_VERSION}"."$${MINOR_VERSION}"."$${PATCH_VERSION}
According to section 4.3 of Autotools Mythbuster, we should have the API version in the library's target name
TARGET = qmake-example-$${MAJOR_VERSION}"."$${MINOR_VERSION}
We will write a define in config.h for access to the semver.org version as a double quoted string
QMAKE_SUBSTITUTES += config.h.in
Our example happens to use QDebug, so we need QtCore here
QT = core
This is of course optional
CONFIG += c++14
We will be using libtool style libraries
CONFIG += compile_libtool
CONFIG += create_libtool
These will create a pkg-config .pc file for us
CONFIG += create_pc create_prl no_install_prl
Project sources
SOURCES = qmake-example.cpp
Project’s public and private headers
HEADERS = qmake-example.h
We will install the headers in an API-specific include path
headers.path = $${PREFIX}/include/qmake-example-$${MAJOR_VERSION}"."$${MINOR_VERSION}
Here put only the publicly installed headers
headers.files = $${HEADERS}
Here we will install the library to
target.path = $${PREFIX}/lib
This is the configuration for generating the pkg-config file
QMAKE_PKGCONFIG_NAME = $${TARGET}
QMAKE_PKGCONFIG_DESCRIPTION = An example that illustrates how to do it right with qmake

# This is our libdir
QMAKE_PKGCONFIG_LIBDIR = $$target.path

# This is where our API specific headers are
QMAKE_PKGCONFIG_INCDIR = $$headers.path

QMAKE_PKGCONFIG_DESTDIR = pkgconfig
QMAKE_PKGCONFIG_PREFIX = $${PREFIX}
QMAKE_PKGCONFIG_VERSION = $$VERSION

# These are dependencies that our library needs
QMAKE_PKGCONFIG_REQUIRES = Qt5Core
Installation targets (the pkg-config seems to install automatically)
INSTALLS += headers target
This will be the result after make-install
├── include
│   └── qmake-example-3.2
│       └── qmake-example.h
└── lib
    ├── libqmake-example-3.2.so -> libqmake-example-3.2.so.3.2.1
    ├── libqmake-example-3.2.so.3 -> libqmake-example-3.2.so.3.2.1
    ├── libqmake-example-3.2.so.3.2 -> libqmake-example-3.2.so.3.2.1
    ├── libqmake-example-3.2.so.3.2.1
    ├── libqmake-example-3.la
    └── pkgconfig
        └── qmake-example-3.pc
ps. Dear friends working at their own customers: when I visit your customer, I no longer want to see that you produced completely stupid wrong qmake based projects for them. Libtoolize it all, get an API version in your Library’s so-name and do distribute a pkg-config .pc file. That’s the very least to pass your exam. Also read this document (and stop pretending that you don’t need to know this when at the same time you charge them real money pretending that you know something about modern UNIX software development).
I said it before, we shouldn’t finance the US’s war-industry any longer. It’s not a reliable partner.
I’m sticking to my guns on this one,
Let's build ourselves a European army, utilizing European technology. Built, engineered and manufactured by Europeans.
We engineers are ready. Let us do it.
Merkel and Macron should use everything in their economic power to invest in our own European Military.
For example whenever the ECB must pump money in the EU-system, it could do that by increased spending on European military.
This would be a great way to increase the EURO inflation to match the ‘below but near two percent annual inflation’ target.
However. The EU budget for military should not go to NATO. Right now it should go to the EU's own national armies. NATO is more or less the United States' military influence in Europe. We've seen at the last G7 that we can't rely on the United States' help.
Therefore, it should use exclusively European suppliers for military hardware. We don't want to spend euros outside of our EU system. Let the money circulate within our EU economy. This implies no F-35 for Belgium. Instead, for example, the Eurofighter Typhoon. The fact that Belgium can't deliver the United States' nuclear weapons without their F-35 means that the United States should take their nuclear bombs back. There is no democratic legitimacy to keep them in Belgium anyway.
It’s also time to create a pillar similar to the European Union: a military branch of the EU.
Belgium and the Netherlands are already sharing naval and air force resources. Let's extend this principle to other EU countries.
I mean, look at the conversation we’re having right now. You’re certainly willing to risk offending me in the pursuit of truth. Why should you have the right to do that? It’s been rather uncomfortable.
— Jordan Peterson, 2018
From: Tom Lendacky
Subject: [PATCH] x86/cpu, x86/pti: Do not enable PTI on AMD processors
Date: Tue, 26 Dec 2017 23:43:54 -0600

AMD processors are not subject to the types of attacks that the kernel
page table isolation feature protects against. The AMD microarchitecture
does not allow memory references, including speculative references, that
access higher privileged data when running in a lesser privileged mode
when that access would result in a page fault.

Disable page table isolation by default on AMD processors by not setting
the X86_BUG_CPU_INSECURE feature, which controls whether X86_FEATURE_PTI
is set.

Signed-off-by: Tom Lendacky
---
 arch/x86/kernel/cpu/common.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index c47de4e..7d9e3b0 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -923,8 +923,8 @@ static void __init early_identify_cpu(struct cpuinfo_x86 *c)
 	setup_force_cpu_cap(X86_FEATURE_ALWAYS);

-	/* Assume for now that ALL x86 CPUs are insecure */
-	setup_force_cpu_bug(X86_BUG_CPU_INSECURE);
+	if (c->x86_vendor != X86_VENDOR_AMD)
+		setup_force_cpu_bug(X86_BUG_CPU_INSECURE);

 	fpu__init_system(c);
With asynchronous commands we have typical commands from the Model View ViewModel world that return asynchronously.
Whenever that happens we want result reporting and progress reporting. We basically want something like this in QML:
Item {
    id: container
    property ViewModel viewModel: ViewModel {}

    Connections {
        target: viewModel.asyncHelloCommand
        onExecuteProgressed: {
            progressBar.value = value
            progressBar.maximumValue = maximum
        }
    }

    ProgressBar {
        id: progressBar
    }

    Button {
        enabled: viewModel.asyncHelloCommand.canExecute
        onClicked: viewModel.asyncHelloCommand.execute()
    }
}
How do we do this? First we start with defining an AbstractAsyncCommand (the implementation of the protected APIs can be found here):
class AbstractAsyncCommand : public AbstractCommand
{
    Q_OBJECT
public:
    AbstractAsyncCommand(QObject *parent=0);

    Q_INVOKABLE virtual QFuture<void*> executeAsync() = 0;
    virtual void execute() Q_DECL_OVERRIDE;
signals:
    void executeFinished(void* result);
    void executeProgressed(int value, int maximum);
protected:
    QSharedPointer<QFutureInterface<void*>> start();
    void progress(QSharedPointer<QFutureInterface<void*>> fut, int value, int total);
    void finish(QSharedPointer<QFutureInterface<void*>> fut, void* result);
private:
    QVector<QSharedPointer<QFutureInterface<void*>>> m_futures;
};
After that we provide an implementation:
#include <QThreadPool>
#include <QRunnable>

#include <MVVM/Commands/AbstractAsyncCommand.h>

class AsyncHelloCommand: public AbstractAsyncCommand
{
    Q_OBJECT
public:
    AsyncHelloCommand(QObject *parent=0);
    bool canExecute() const Q_DECL_OVERRIDE { return true; }
    QFuture<void*> executeAsync() Q_DECL_OVERRIDE;
private:
    void* executeAsyncTaskFunc();
    QSharedPointer<QFutureInterface<void*>> current;
    QMutex mutex;
};

#include "asynchellocommand.h"

#include <QtConcurrent/QtConcurrent>

AsyncHelloCommand::AsyncHelloCommand(QObject* parent)
    : AbstractAsyncCommand(parent) { }

void* AsyncHelloCommand::executeAsyncTaskFunc()
{
    for (int i=0; i<10; i++) {
        QThread::sleep(1);
        qDebug() << "Hello Async!";
        mutex.lock();
        progress(current, i, 10);
        mutex.unlock();
    }
    return nullptr;
}

QFuture<void*> AsyncHelloCommand::executeAsync()
{
    mutex.lock();
    current = start();
    QFutureWatcher<void*>* watcher = new QFutureWatcher<void*>(this);
    connect(watcher, &QFutureWatcher<void*>::progressValueChanged, this, [=]{
        mutex.lock();
        progress(current, watcher->progressValue(), watcher->progressMaximum());
        mutex.unlock();
    });
    connect(watcher, &QFutureWatcher<void*>::finished, this, [=]{
        void* result=watcher->result();
        mutex.lock();
        finish(current, result);
        mutex.unlock();
        watcher->deleteLater();
    });
    watcher->setFuture(QtConcurrent::run(this, &AsyncHelloCommand::executeAsyncTaskFunc));
    QFuture<void*> future = current->future();
    mutex.unlock();

    return future;
}
You can find the complete working example here.
Children aren’t worried about the future. Young people aren’t worried about the future; they’re worried about us: us leading them into the future we envision
Jack Ma — Oct 2017, keynote speech at Alibaba Cloud’s Computing Conference in Hangzhou
I’m filled up with new inspiration.
Going to Iceland for the holidays is truly an amazing experience. With its stunning landscapes, natural wonders like geysers and waterfalls, and the opportunity to witness the breathtaking Northern Lights, Iceland offers a unique and unforgettable holiday destination. From exploring the vibrant capital city of Reykjavik to venturing into the rugged and pristine wilderness of the countryside, there are endless adventures and memories to be made in this beautiful country.
In the .NET XAML world, you have the ICommand, the CompositeCommand and the DelegateCommand. You use these commands to bind them, in a declarative way, as properties to XAML components like menu items and buttons. You can find an excellent book on this titled Prism 5.0 for WPF.
The ICommand defines two things: a canExecute property and an execute() method. The CompositeCommand allows you to combine multiple commands together, and the DelegateCommand makes it possible to pass two delegates (functors or lambdas): one for the canExecute evaluation and one for the execute() method.
The idea here is that you want to make it possible to put said commands in a ViewModel and then data bind them to your View (so in QML that’s with Q_INVOKABLE and Q_PROPERTY). Meaning that the action of the component in the view results in execute() being called, and the component in the view being enabled or not is bound to the canExecute bool property.
In QML that of course corresponds to a ViewModel.cpp for a View.qml. Meanwhile you also want to make it possible to use certain commands declaratively in the View.qml without involving the ViewModel.cpp.
So I tried making exactly that. I’ve placed it on github in a project I plan to use more often to collect MVVM techniques I come up with. And in this article I’ll explain how and what. I’ll stick to the header files and the QML file.
We start with defining an AbstractCommand interface. This is very much like .NET's ICommand, of course:
#include <QObject>

class AbstractCommand : public QObject
{
    Q_OBJECT
    Q_PROPERTY(bool canExecute READ canExecute NOTIFY canExecuteChanged)
public:
    AbstractCommand(QObject *parent = 0):QObject(parent){}
    Q_INVOKABLE virtual void execute() = 0;
    virtual bool canExecute() const = 0;
signals:
    void canExecuteChanged(bool canExecute);
};
We will also make a command that is very easy to use in QML, the EmitCommand:
#include <MVVM/Commands/AbstractCommand.h>

class EmitCommand : public AbstractCommand
{
    Q_OBJECT
    Q_PROPERTY(bool canExecute READ canExecute WRITE setCanExecute NOTIFY privateCanExecuteChanged)
public:
    EmitCommand(QObject *parent=0):AbstractCommand(parent){}

    void execute() Q_DECL_OVERRIDE;
    bool canExecute() const Q_DECL_OVERRIDE;
public slots:
    void setCanExecute(bool canExecute);
signals:
    void executes();
    void privateCanExecuteChanged();
private:
    bool canExe = false;
};
We make a command that allows us to combine multiple commands together as one. This is the equivalent of .NET’s CompositeCommand, here you have our own:
#include <QSharedPointer>
#include <QQmlListProperty>

#include <MVVM/Commands/AbstractCommand.h>
#include <MVVM/Commands/ListCommand.h>

class CompositeCommand : public AbstractCommand
{
    Q_OBJECT

    Q_PROPERTY(QQmlListProperty<AbstractCommand> commands READ commands NOTIFY commandsChanged )
    Q_CLASSINFO("DefaultProperty", "commands")
public:
    CompositeCommand(QObject *parent = 0):AbstractCommand (parent) {}
    CompositeCommand(QList<QSharedPointer<AbstractCommand> > cmds, QObject *parent=0);
    ~CompositeCommand();

    void execute() Q_DECL_OVERRIDE;
    bool canExecute() const Q_DECL_OVERRIDE;

    void remove(const QSharedPointer<AbstractCommand> &cmd);
    void add(const QSharedPointer<AbstractCommand> &cmd);
    void add(AbstractCommand *cmd);
    void clearCommands();
    QQmlListProperty<AbstractCommand> commands();

signals:
    void commandsChanged();
private slots:
    void onCanExecuteChanged(bool canExecute);
private:
    QList<QSharedPointer<AbstractCommand> > cmds;

    static void appendCommand(QQmlListProperty<AbstractCommand> *lst, AbstractCommand *cmd);
    static AbstractCommand* command(QQmlListProperty<AbstractCommand> *lst, int idx);
    static void clearCommands(QQmlListProperty<AbstractCommand> *lst);
    static int commandCount(QQmlListProperty<AbstractCommand> *lst);
};
We also make a command that looks a lot like ListElement in QML’s ListModel:
#include <MVVM/Commands/AbstractCommand.h>

class ListCommand : public AbstractCommand
{
    Q_OBJECT
    Q_PROPERTY(AbstractCommand *command READ command WRITE setCommand NOTIFY commandChanged)
    Q_PROPERTY(QString text READ text WRITE setText NOTIFY textChanged)
public:
    ListCommand(QObject *parent = 0):AbstractCommand(parent){}

    void execute() Q_DECL_OVERRIDE;
    bool canExecute() const Q_DECL_OVERRIDE;

    AbstractCommand* command() const;
    void setCommand(AbstractCommand *newCommand);
    void setCommand(const QSharedPointer<AbstractCommand> &newCommand);

    QString text() const;
    void setText(const QString &newValue);
signals:
    void commandChanged();
    void textChanged();
private:
    QSharedPointer<AbstractCommand> cmd;
    QString txt;
};
Let’s now also make the equivalent for QML’s ListModel, CommandListModel:
#include <QObject>
#include <QQmlListProperty>

#include <MVVM/Commands/ListCommand.h>

class CommandListModel:public QObject
{
    Q_OBJECT
    Q_PROPERTY(QQmlListProperty<ListCommand> commands READ commands NOTIFY commandsChanged )
    Q_CLASSINFO("DefaultProperty", "commands")
public:
    CommandListModel(QObject *parent = 0):QObject(parent){}

    void clearCommands();
    int commandCount() const;
    QQmlListProperty<ListCommand> commands();
    void appendCommand(ListCommand *command);
    ListCommand* command(int idx) const;
signals:
    void commandsChanged();
private:
    static void appendCommand(QQmlListProperty<ListCommand> *lst, ListCommand *cmd);
    static ListCommand* command(QQmlListProperty<ListCommand> *lst, int idx);
    static void clearCommands(QQmlListProperty<ListCommand> *lst);
    static int commandCount(QQmlListProperty<ListCommand> *lst);

    QList<ListCommand* > cmds;
};
Okay, let’s now put all this together in a simple example QML:
import QtQuick 2.3
import QtQuick.Window 2.0
import QtQuick.Controls 1.2

import be.codeminded.mvvm 1.0

import Example 1.0 as A

Window {
    width: 360
    height: 360
    visible: true

    ListView {
        id: listView
        anchors.fill: parent

        delegate: Item {
            height: 20
            width: listView.width
            MouseArea {
                anchors.fill: parent
                onClicked: if (modelData.canExecute) modelData.execute()
            }
            Text {
                anchors.fill: parent
                text: modelData.text
                color: modelData.canExecute ? "black" : "grey"
            }
        }

        model: comsModel.commands

        property bool combineCanExecute: false

        CommandListModel {
            id: comsModel

            ListCommand {
                text: "C++ Lambda command"
                command: A.LambdaCommand
            }

            ListCommand {
                text: "Enable combined"
                command: EmitCommand {
                    onExecutes: { console.warn( "Hello1");
                        listView.combineCanExecute=true; }
                    canExecute: true
                }
            }

            ListCommand {
                text: "Disable combined"
                command: EmitCommand {
                    onExecutes: { console.warn( "Hello2");
                        listView.combineCanExecute=false; }
                    canExecute: true
                }
            }

            ListCommand {
                text: "Combined emit commands"
                command: CompositeCommand {
                    EmitCommand {
                        onExecutes: console.warn( "Emit command 1");
                        canExecute: listView.combineCanExecute
                    }
                    EmitCommand {
                        onExecutes: console.warn( "Emit command 2");
                        canExecute: listView.combineCanExecute
                    }
                }
            }
        }
    }
}
I made a task-bug for this on Qt, here.
I’m at home now. I don’t do non-public unpaid work. So let’s blog the example I’m making for him.
workplace.h
#ifndef Workplace_H
#define Workplace_H

#include <QObject>
#include <QFuture>
#include <QWaitCondition>
#include <QMutex>
#include <QStack>
#include <QList>
#include <QThread>
#include <QFutureWatcher>

class Workplace;

typedef enum {
    WT_INSERTS,
    WT_QUERY
} WorkplaceWorkType;

typedef struct {
    WorkplaceWorkType type;
    QList<int> values;
    QString query;
    QFutureInterface<bool> insertIface;
    QFutureInterface<QList<QStringList> > queryIface;
} WorkplaceWork;

class WorkplaceWorker: public QThread {
    Q_OBJECT
public:
    WorkplaceWorker(QObject *parent = NULL)
        : QThread(parent), m_running(false) { }
    void run() Q_DECL_OVERRIDE;
    void pushWork(WorkplaceWork *a_work);
private:
    QStack<WorkplaceWork*> m_ongoing;
    QMutex m_mutex;
    QWaitCondition m_waitCondition;
    bool m_running;
};

class Workplace: public QObject {
    Q_OBJECT
public:
    explicit Workplace(QObject *a_parent=0) : QObject (a_parent) {}
    bool insert(QList<int> a_values);
    QList<QStringList> query(const QString &a_param);
    QFuture<bool> insertAsync(QList<int> a_values);
    QFuture<QList<QStringList> > queryAsync(const QString &a_param);
private:
    WorkplaceWorker m_worker;
};

class App: public QObject {
    Q_OBJECT
public slots:
    void perform();
    void onFinished();
private:
    Workplace m_workplace;
};

#endif // Workplace_H
workplace.cpp
#include "workplace.h" void App::onFinished() { QFutureWatcher<bool> *watcher = static_cast<QFutureWatcher<bool>* > ( sender() ); delete watcher; } void App::perform() { for (int i=0; i<10; i++) { QList<int> vals; vals.append(1); vals.append(2); QFutureWatcher<bool> *watcher = new QFutureWatcher<bool>; connect (watcher, &QFutureWatcher<bool>::finished, this, &App::onFinished); watcher->setFuture( m_workplace.insertAsync( vals ) ); } for (int i=0; i<10; i++) { QList<int> vals; vals.append(1); vals.append(2); qWarning() << m_workplace.insert( vals ); qWarning() << m_workplace.query("test"); } } void WorkplaceWorker::pushWork(WorkplaceWork *a_work) { if (!m_running) { start(); } m_mutex.lock(); switch (a_work->type) { case WT_QUERY: m_ongoing.push_front( a_work ); break; default: m_ongoing.push_back( a_work ); } m_waitCondition.wakeAll(); m_mutex.unlock(); } void WorkplaceWorker::run() { m_mutex.lock(); m_running = true; while ( m_running ) { m_mutex.unlock(); m_mutex.lock(); if ( m_ongoing.isEmpty() ) { m_waitCondition.wait(&m_mutex); } WorkplaceWork *work = m_ongoing.pop(); m_mutex.unlock(); // Do work here and report progress sleep(1); switch (work->type) { case WT_QUERY: { // Report result here QList<QStringList> result; QStringList row; row.append("abc"); row.append("def"); result.append(row); work->queryIface.reportFinished( &result ); } break; case WT_INSERTS: default: { // Report result here bool result = true; work->insertIface.reportFinished( &result ); } break; } m_mutex.lock(); delete work; } m_mutex.unlock(); } bool Workplace::insert(QList<int> a_values) { WorkplaceWork *work = new WorkplaceWork;; QFutureWatcher<bool> watcher; work->type = WT_INSERTS; work->values = a_values; work->insertIface.reportStarted(); watcher.setFuture ( work->insertIface.future() ); m_worker.pushWork( work ); watcher.waitForFinished(); return watcher.result(); } QList<QStringList> Workplace::query(const QString &a_param) { WorkplaceWork *work = new WorkplaceWork; QFutureWatcher<QList<QStringList> > watcher; work->type = WT_QUERY; work->query = a_param; work->queryIface.reportStarted(); watcher.setFuture ( work->queryIface.future() ); m_worker.pushWork( work ); watcher.waitForFinished(); return watcher.result(); } QFuture<bool> Workplace::insertAsync(QList<int> a_values) { WorkplaceWork *work = new WorkplaceWork; work->type = WT_INSERTS; work->values = a_values; work->insertIface.reportStarted(); QFuture<bool> future = work->insertIface.future(); m_worker.pushWork( work ); return future; } QFuture<QList<QStringList> > Workplace::queryAsync(const QString &a_param) { WorkplaceWork *work = new WorkplaceWork; work->type = WT_QUERY; work->query = a_param; work->queryIface.reportStarted(); QFuture<QList<QStringList> > future = work->queryIface.future(); m_worker.pushWork( work ); return future; }
Imagine we want an editor that has undo and redo capability. But the operations on the editor are all asynchronous. This implies that also undo and redo are asynchronous operations.
We want all this to be available in QML, we want to use QFuture for the asynchronous stuff and we want to use QUndoCommand for the undo and redo capability.
But how do they do it?
First of all we will make a status object, to put the status of the asynchronous operations in (asyncundoable.h).
class AbstractAsyncStatus: public QObject
{
    Q_OBJECT

    Q_PROPERTY(bool success READ success CONSTANT)
    Q_PROPERTY(int extra READ extra CONSTANT)

public:
    AbstractAsyncStatus(QObject *parent):QObject (parent) {}
    virtual bool success() = 0;
    virtual int extra() = 0;
};
We will be passing it around as a QSharedPointer, so that lifetime management becomes easy. But typing that out is going to give us long APIs. So let’s make a typedef for that (asyncundoable.h).
typedef QSharedPointer<AbstractAsyncStatus> AsyncStatusPointer;
Now let’s make ourselves an undo command that allows us to wait for asynchronous undo and asynchronous redo. We’re combining QUndoCommand and QFutureInterface here (asyncundoable.h).
class AbstractAsyncUndoable: public QUndoCommand
{
public:
    AbstractAsyncUndoable( QUndoCommand *parent = nullptr )
        : QUndoCommand ( parent )
        , m_undoFuture ( new QFutureInterface<AsyncStatusPointer>() )
        , m_redoFuture ( new QFutureInterface<AsyncStatusPointer>() ) {}

    QFuture<AsyncStatusPointer> undoFuture()
        { return m_undoFuture->future(); }
    QFuture<AsyncStatusPointer> redoFuture()
        { return m_redoFuture->future(); }

protected:
    QScopedPointer<QFutureInterface<AsyncStatusPointer> > m_undoFuture;
    QScopedPointer<QFutureInterface<AsyncStatusPointer> > m_redoFuture;
};
Okay, let’s implement these with an example operation. First the concrete status object (asyncexample1command.h).
class AsyncExample1Status: public AbstractAsyncStatus
{
    Q_OBJECT

    Q_PROPERTY(bool example1 READ example1 CONSTANT)

public:
    AsyncExample1Status ( bool success, int extra, bool example1, QObject *parent = nullptr )
        : AbstractAsyncStatus(parent)
        , m_example1 ( example1 )
        , m_success ( success )
        , m_extra ( extra ) {}

    bool example1() { return m_example1; }
    bool success() Q_DECL_OVERRIDE { return m_success; }
    int extra() Q_DECL_OVERRIDE { return m_extra; }

private:
    bool m_example1 = false;
    bool m_success = false;
    int m_extra = -1;
};
Let’s make a QUndoCommand that uses a timer to simulate asynchronous behavior. We could also use QtConcurrent’s run function to use a QThreadPool and QRunnable instances that also implement QFutureInterface, of course. Seasoned Qt developers know what I mean. For the sake of example, I wanted to illustrate that QFuture can also be used for asynchronous things that aren’t threads. We’ll use the lambda because QUndoCommand isn’t a QObject, so no easy slots. That’s the only reason (asyncexample1command.h).
class AsyncExample1Command: public AbstractAsyncUndoable
{
public:
    AsyncExample1Command(bool example1, QUndoCommand *parent = nullptr)
        : AbstractAsyncUndoable ( parent ), m_example1(example1) {}

    void undo() Q_DECL_OVERRIDE {
        m_undoFuture->reportStarted();
        QTimer *timer = new QTimer();
        timer->setSingleShot(true);
        QObject::connect(timer, &QTimer::timeout, [=]() {
            QSharedPointer<AbstractAsyncStatus> result;
            result.reset(new AsyncExample1Status ( true, 1, m_example1 ));
            m_undoFuture->reportFinished(&result);
            timer->deleteLater();
        } );
        timer->start(1000);
    }

    void redo() Q_DECL_OVERRIDE {
        m_redoFuture->reportStarted();
        QTimer *timer = new QTimer();
        timer->setSingleShot(true);
        QObject::connect(timer, &QTimer::timeout, [=]() {
            QSharedPointer<AbstractAsyncStatus> result;
            result.reset(new AsyncExample1Status ( true, 2, m_example1 ));
            m_redoFuture->reportFinished(&result);
            timer->deleteLater();
        } );
        timer->start(1000);
    }

private:
    QTimer m_timer;
    bool m_example1;
};
Let's now define something we get from the strategy design pattern: an editor behavior. Implementations provide an editor all its editing behaviors (abtracteditorbehavior.h).
class AbstractEditorBehavior : public QObject
{
    Q_OBJECT
public:
    AbstractEditorBehavior( QObject *parent) : QObject (parent) {}

    virtual QFuture<AsyncStatusPointer> performExample1( bool example1 ) = 0;
    virtual QFuture<AsyncStatusPointer> performUndo() = 0;
    virtual QFuture<AsyncStatusPointer> performRedo() = 0;
    virtual bool canRedo() = 0;
    virtual bool canUndo() = 0;
};
So far so good, so let's make an implementation that has a QUndoStack and that therefore is undoable (undoableeditorbehavior.h).
class UndoableEditorBehavior: public AbstractEditorBehavior
{
public:
    UndoableEditorBehavior(QObject *parent = nullptr)
        : AbstractEditorBehavior (parent)
        , m_undoStack ( new QUndoStack ){}

    QFuture<AsyncStatusPointer> performExample1( bool example1 ) Q_DECL_OVERRIDE {
        AsyncExample1Command *command = new AsyncExample1Command ( example1 );
        m_undoStack->push(command);
        return command->redoFuture();
    }

    QFuture<AsyncStatusPointer> performUndo() {
        const AbstractAsyncUndoable *undoable =
            dynamic_cast<const AbstractAsyncUndoable *>(
                m_undoStack->command( m_undoStack->index() - 1));
        m_undoStack->undo();
        return const_cast<AbstractAsyncUndoable*>(undoable)->undoFuture();
    }

    QFuture<AsyncStatusPointer> performRedo() {
        const AbstractAsyncUndoable *undoable =
            dynamic_cast<const AbstractAsyncUndoable *>(
                m_undoStack->command( m_undoStack->index() ));
        m_undoStack->redo();
        return const_cast<AbstractAsyncUndoable*>(undoable)->redoFuture();
    }

    bool canRedo() Q_DECL_OVERRIDE { return m_undoStack->canRedo(); }
    bool canUndo() Q_DECL_OVERRIDE { return m_undoStack->canUndo(); }

private:
    QScopedPointer<QUndoStack> m_undoStack;
};
Now we only need an editor, right (editor.h)?
class Editor: public QObject
{
    Q_OBJECT

    Q_PROPERTY(AbstractEditorBehavior* editorBehavior READ editorBehavior CONSTANT)
public:
    Editor(QObject *parent=nullptr) : QObject(parent)
        , m_editorBehavior ( new UndoableEditorBehavior ) { }

    AbstractEditorBehavior* editorBehavior() { return m_editorBehavior.data(); }

    Q_INVOKABLE void example1Async(bool example1) {
        QFutureWatcher<AsyncStatusPointer> *watcher = new QFutureWatcher<AsyncStatusPointer>(this);
        connect(watcher, &QFutureWatcher<AsyncStatusPointer>::finished,
                this, &Editor::onExample1Finished);
        watcher->setFuture ( m_editorBehavior->performExample1(example1) );
    }

    Q_INVOKABLE void undoAsync() {
        if (m_editorBehavior->canUndo()) {
            QFutureWatcher<AsyncStatusPointer> *watcher = new QFutureWatcher<AsyncStatusPointer>(this);
            connect(watcher, &QFutureWatcher<AsyncStatusPointer>::finished,
                    this, &Editor::onUndoFinished);
            watcher->setFuture ( m_editorBehavior->performUndo() );
        }
    }

    Q_INVOKABLE void redoAsync() {
        if (m_editorBehavior->canRedo()) {
            QFutureWatcher<AsyncStatusPointer> *watcher = new QFutureWatcher<AsyncStatusPointer>(this);
            connect(watcher, &QFutureWatcher<AsyncStatusPointer>::finished,
                    this, &Editor::onRedoFinished);
            watcher->setFuture ( m_editorBehavior->performRedo() );
        }
    }

signals:
    void example1Finished( AsyncExample1Status *status );
    void undoFinished( AbstractAsyncStatus *status );
    void redoFinished( AbstractAsyncStatus *status );

private slots:
    void onExample1Finished() {
        QFutureWatcher<AsyncStatusPointer> *watcher =
            dynamic_cast<QFutureWatcher<AsyncStatusPointer>*> (sender());
        emit example1Finished( watcher->result().objectCast<AsyncExample1Status>().data() );
        watcher->deleteLater();
    }

    void onUndoFinished() {
        QFutureWatcher<AsyncStatusPointer> *watcher =
            dynamic_cast<QFutureWatcher<AsyncStatusPointer>*> (sender());
        emit undoFinished( watcher->result().objectCast<AbstractAsyncStatus>().data() );
        watcher->deleteLater();
    }

    void onRedoFinished() {
        QFutureWatcher<AsyncStatusPointer> *watcher =
            dynamic_cast<QFutureWatcher<AsyncStatusPointer>*> (sender());
        emit redoFinished( watcher->result().objectCast<AbstractAsyncStatus>().data() );
        watcher->deleteLater();
    }

private:
    QScopedPointer<AbstractEditorBehavior> m_editorBehavior;
};
Okay, let’s register this up to make it known in QML and make ourselves a main function (main.cpp).
#include <QtQml>
#include <QGuiApplication>
#include <QQmlApplicationEngine>

#include <editor.h>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);
    QQmlApplicationEngine engine;
    qmlRegisterType<Editor>("be.codeminded.asyncundo", 1, 0, "Editor");
    engine.load(QUrl(QStringLiteral("qrc:/main.qml")));
    return app.exec();
}
Now, let’s make ourselves a simple QML UI to use this with (main.qml).
import QtQuick 2.3
import QtQuick.Window 2.2
import QtQuick.Controls 1.2

import be.codeminded.asyncundo 1.0

Window {
    visible: true
    width: 360
    height: 360

    Editor {
        id: editor
        onUndoFinished: text.text = "undo"
        onRedoFinished: text.text = "redo"
        onExample1Finished: text.text = "whoohoo " + status.example1
    }

    Text {
        id: text
        text: qsTr("Hello World")
        anchors.centerIn: parent
    }

    Action {
        shortcut: "Ctrl+z"
        onTriggered: editor.undoAsync()
    }
    Action {
        shortcut: "Ctrl+y"
        onTriggered: editor.redoAsync()
    }
    Button {
        onClicked: editor.example1Async(99);
    }
}
You can find the sources of this complete example at github. Enjoy!