# Mi Band 5 Review / Mi Band Evolution

2020-08-02 13:37 UTC  by  madman2k

Xiaomi has recently released the new Mi Band 5. Since I have owned each band starting with the Mi Band 2, I think it is time to look back and see where the Mi Band has gone in recent years.

Categories: News

# Meepo Mini 2 vs. Archos SK8

2020-07-19 20:58 UTC  by  madman2k

Having never skateboarded before, I saw the Archos SK8 electric skateboard on sale for about 80€ and thought: why not give it a try? This got me into the whole electric skateboarding thing.

Categories: News

# Lecture on Augmented Reality

2020-05-15 14:57 UTC  by  madman2k

Due to the current circumstances, I had to record the lectures on augmented reality, which I typically hold live. This was much more work than anticipated.
On the other hand, this means that I can now make them available via YouTube.

So, if you ever wanted to learn about the basic algorithms behind Augmented Reality, now is your chance.

The lecture is structured in two parts:

• Camera Calibration, and
• Object Pose Estimation.


Categories: Articles

# Are we dead yet?

2020-03-23 11:40 UTC  by  madman2k

I am quite frustrated with the corona graphs in the news, since most reporters seem to have skipped math classes back then. For instance, just plotting the number of confirmed infections at the respective dates does not tell you anything, due to the different time points of outbreak. So let's see whether I can do better:

With the site above, I tried to improve on a few things:

• The charts are live: they update themselves each time you load the site.
• The curves are normalized by the time-point of outbreak, so you can compare the course in different countries.
• You can select the countries that you want to compare.
• Different metrics are computed that allow comparing the corona countermeasures and impact across countries with different population sizes.
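
The outbreak normalization can be sketched as follows (a minimal illustration with made-up numbers; the function name and the threshold of 100 cases are my own choices, not the actual site code):

```python
# A minimal sketch of the outbreak normalization (function name and the
# threshold of 100 cases are illustrative, not the actual site code)
def normalize_by_outbreak(confirmed, threshold=100):
    """confirmed: cumulative daily case counts for one country."""
    for day, count in enumerate(confirmed):
        if count >= threshold:
            return confirmed[day:]  # day 0 = start of the outbreak
    return []  # the outbreak has not started yet

# two made-up countries with different outbreak dates ...
a = normalize_by_outbreak([1, 5, 120, 400, 900])
b = normalize_by_outbreak([0, 2, 30, 150, 500, 1100])
# ... can now be plotted on a common "days since outbreak" axis
```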
Categories: News

# Fast wire-frame rendering with OpenCV

2019-11-06 16:26 UTC  by  madman2k

Let's say you have mesh data in the typical format: triangulated, with a vertex buffer and an index buffer. E.g. something like

>>> vertices

[[[ 46.27500153  19.2329998   48.5       ]]

[[  7.12050009  15.28199959  59.59049988]]

[[ 32.70849991  29.56100082  45.72949982]]

...,

>>> indices

[[1068 1646 1577]
[1057  908  938]
[ 420 1175  237]
..., 

Typically you would need to feed it into OpenGL to get an image out of it. However, there are occasions when setting up OpenGL would be too much hassle or when you deliberately want to render on the CPU.

In this case we can use OpenCV to do the rendering in just two function calls:

import cv2
import numpy as np

# start from a dark gray canvas
img = np.full((720, 1280, 3), 64, dtype=np.uint8)

# project all vertices once; rot, trans and K describe the calibrated camera
pts2d = cv2.projectPoints(vertices, rot, trans, K, None)[0].astype(int)
# expand to per-triangle point lists and draw the wire-frame
cv2.polylines(img, pts2d[indices], True, (255, 255, 255), 1, cv2.LINE_AA)

See the documentation of cv2.projectPoints for the meaning of the parameters.

Note how we only project each vertex once and only apply the mesh topology afterwards. Here, we use numpy advanced indexing, pts2d[indices], to perform the expansion.
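
To see what this expansion does, here is a tiny numpy-only illustration (the coordinates are made up; the (N, 1, 2) layout matches what projectPoints returns):

```python
import numpy as np

# four projected vertices in projectPoints' (N, 1, 2) layout (made-up values)
pts2d = np.array([[[0, 0]], [[10, 0]], [[10, 10]], [[0, 10]]])
# two triangles referencing the vertices by index
indices = np.array([[0, 1, 2], [0, 2, 3]])

tris = pts2d[indices]
# tris.shape == (2, 3, 1, 2): one list of three 2D points per triangle,
# which is exactly the layout polylines iterates over
```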

This is pretty fast, too. The code above takes only about 9 ms on my machine.

In case you want filled polygons, this is pretty easy as well:

for face in indices:
    cv2.fillConvexPoly(img, pts2d[face], (64, 64, 192))

However, as we need a Python loop in this case and also have quite some overdraw, it is considerably slower at about 20 ms.

Of course you can also combine both to get an image like in the post title.

From here on you can continue to go crazy and compute face normals to do culling and shading.
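
For instance, back-face culling could be sketched like this (a numpy-only sketch; I assume the mesh is already in camera coordinates with the camera looking along +z, and visible_faces is a hypothetical helper, not an OpenCV function):

```python
import numpy as np

# A numpy-only sketch of back-face culling via face normals. Assumptions:
# the mesh is in camera coordinates, the camera looks along +z, and front
# faces are wound so that their normals point towards the camera.
# visible_faces is a hypothetical helper, not part of OpenCV.
def visible_faces(vertices, indices):
    tris = vertices.reshape(-1, 3)[indices]      # (M, 3, 3) corner positions
    normals = np.cross(tris[:, 1] - tris[:, 0],  # (M, 3) unnormalized normals
                       tris[:, 2] - tris[:, 0])
    # keep faces whose normal has a negative z component (facing the camera)
    return indices[normals[:, 2] < 0]
```

The returned subset of indices can be fed into the polylines call above unchanged.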

Categories: Graphics

# Xiaomi AirDots Pro 2 / Air2 Review

2019-11-02 15:56 UTC  by  madman2k

So after having made fun of people for “wearing toothbrushes”, I finally ended up buying such headphones myself.

Categories: Articles

# calibDB: easy camera calibration as a web-service

2019-08-09 16:13 UTC  by  madman2k

Camera calibration just got even easier. The pose calibration algorithm mentioned here is now available as a web service.

This means that calibration is no longer restricted to a Linux PC – you can also calibrate cameras attached to Windows/OSX machines and even mobile phones.
Furthermore, you will not have to calibrate at all if your device is already known to the service.
The underlying algorithm ensures that the obtained calibrations are reliable and thus can be shared between devices of the same series.

Aggregating these calibrations, while providing on-the-fly calibration for unknown devices, forms the calibDB web service.

In the future we will make our REST API public so you can transparently retrieve calibrations for use with your computer vision algorithms.
This will make them accessible to a variety of devices, without you having to worry about the calibration data.

Categories: News

# Beyond the Raspberry Pi for Nextcloud hosting

2019-08-09 15:45 UTC  by  madman2k

When using Nextcloud it makes some sense to host it yourself at home to get the maximum benefit of having your own cloud.

Categories: Articles

# Ubuntu on the Lenovo D330

2019-05-13 17:03 UTC  by  madman2k

The Lenovo D330 2-in-1 convertible (or netbook, as we used to say) is a quite interesting device. It is based on Intel's current low-power core platform, Gemini Lake (GLK), and thus offers great battery life and a fanless design.

Categories: Articles

# From Blender to OpenCV Camera and back

2018-11-08 17:12 UTC  by  madman2k

In case you want to employ Blender for computer vision, e.g. for generating synthetic data, you will need to map the parameters of a calibrated camera to Blender, as well as map the Blender camera parameters back to those of a calibrated camera.

Calibrated cameras are typically based on the pinhole camera model, which at its core consists of the camera matrix and the image size in pixels:

$$K = \begin{bmatrix}f_x & 0 & c_x \\ 0 & f_y& c_y \\ 0 & 0 & 1 \end{bmatrix}, (w, h)$$
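
To illustrate the model: projecting a 3D point given in camera coordinates is just a multiplication with K followed by dehomogenization. With made-up intrinsics:

```python
import numpy as np

# illustrative pinhole projection (the intrinsics and the point are made up)
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])

X = np.array([0.1, -0.05, 2.0])  # 3D point in camera coordinates
p = K @ X
u, v = p[:2] / p[2]              # pixel coordinates after dehomogenization
```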

But if we look at the Blender camera, we find lots of non-standard and duplicate parameters, some with arbitrary units and some without any, like

• unitless shift_x
• duplicate angle, angle_x, angle_y, lens

After doing some research on their meaning and fixing various bugs in the previously proposed conversion formulas, I came up with the following Python code to do the conversion from Blender to OpenCV:

# get the relevant data
cam = bpy.data.objects["cameraName"].data
scene = bpy.context.scene
# assume image is not scaled
assert scene.render.resolution_percentage == 100
# assume angles describe the horizontal field of view
assert cam.sensor_fit != 'VERTICAL'

f_in_mm = cam.lens
sensor_width_in_mm = cam.sensor_width

w = scene.render.resolution_x
h = scene.render.resolution_y

pixel_aspect = scene.render.pixel_aspect_y / scene.render.pixel_aspect_x

f_x = f_in_mm / sensor_width_in_mm * w
f_y = f_x * pixel_aspect

# yes, shift_x is inverted. WTF blender?
c_x = w * (0.5 - cam.shift_x)
# and shift_y is still a percentage of width..
c_y = h * 0.5 + w * cam.shift_y

K = [[f_x,   0, c_x],
     [  0, f_y, c_y],
     [  0,   0,   1]]

So to summarize the above code:

• Note that f_x/f_y encodes the pixel aspect ratio and not the image aspect ratio w/h.
• Blender enforces identical sensor and image aspect ratios. Therefore we do not have to consider them explicitly. Non-square pixels are instead handled via pixel_aspect_x/pixel_aspect_y.
• We left out the skew factor s (non-rectangular pixels) because neither OpenCV nor Blender supports it.
• Blender allows us to scale the output, resulting in a different resolution, but this can easily be handled post-projection. So we explicitly do not handle that.
• Blender has the peculiarity of converting the focal length to either the horizontal or vertical field of view (sensor_fit). Going down the vertical branch is left as an exercise to the reader.
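
For reference, the vertical branch might look roughly like this. This is an untested sketch: I assume that with sensor_fit == 'VERTICAL' the focal length relates to sensor_height and the image height, and that the shifts then scale with the height instead of the width.

```python
# Sketch of the vertical branch (my assumptions: with sensor_fit == 'VERTICAL'
# the field of view is tied to sensor_height and the image height, and the
# shifts become fractions of the height instead of the width)
def blender_to_opencv_vertical(f_in_mm, sensor_height_in_mm,
                               shift_x, shift_y, w, h, pixel_aspect):
    f_y = f_in_mm / sensor_height_in_mm * h
    f_x = f_y / pixel_aspect
    c_x = w * 0.5 - h * shift_x
    c_y = h * (0.5 + shift_y)
    return f_x, f_y, c_x, c_y
```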

The reverse transform can now be derived trivially as:

cam.shift_x = -(c_x / w - 0.5)
cam.shift_y = (c_y - 0.5 * h) / w

cam.lens = f_x / w * sensor_width_in_mm

pixel_aspect = f_y / f_x
scene.render.pixel_aspect_x = 1.0
scene.render.pixel_aspect_y = pixel_aspect
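
As a quick sanity check, the two directions should invert each other. A bpy-free round trip with made-up intrinsics:

```python
# round-trip check with made-up intrinsics (no bpy required)
sensor_width_in_mm = 36.0
w, h = 1920, 1080
f_x, f_y = 1000.0, 1100.0
c_x, c_y = 970.0, 530.0

# K -> Blender (reverse transform from above)
shift_x = -(c_x / w - 0.5)
shift_y = (c_y - 0.5 * h) / w
lens = f_x / w * sensor_width_in_mm
pixel_aspect = f_y / f_x

# Blender -> K (forward conversion from above)
f_x2 = lens / sensor_width_in_mm * w
f_y2 = f_x2 * pixel_aspect
c_x2 = w * (0.5 - shift_x)
c_y2 = h * 0.5 + w * shift_y

# all four values should match up to floating-point noise
assert abs(f_x2 - f_x) < 1e-6 and abs(f_y2 - f_y) < 1e-6
assert abs(c_x2 - c_x) < 1e-6 and abs(c_y2 - c_y) < 1e-6
```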
Categories: News

# Migrating from owncloud 9.1 to nextcloud 11

2017-02-10 23:33 UTC  by  madman2k

First one should ask though: why? My main motivation was that many of the apps I use are easily available in the nextcloud store, while with owncloud I had to pull them manually from GitHub.
Additionally, some of the app authors migrated to nextcloud and did not provide further updates for owncloud.

Another reason is the commit activity: the graphs of the number of commits for owncloud and nextcloud show that owncloud has taken a very noticeable hit after the fork – even though they deny it.

From the user perspective, the lack of contributions is visible, for instance, in the admin interface: with nextcloud you get a nice log browser and system stats, while with owncloud you do not. Furthermore, the nextcloud Android app handles Auto-Upload much better and generally seems more polished. I think one can expect nextcloud to advance faster in general.

## Migrating

For migrating you can follow the excellent instructions of Jos Poortvliet.

In my case owncloud 9.1 was installed on Ubuntu in /var/www/owncloud and I put nextcloud 11 to /var/www/nextcloud. Then the following steps had to be applied:

1. put owncloud in maintenance mode
sudo -u www-data php occ maintenance:mode --on
2. copy over the config.php
cp /var/www/owncloud/config/config.php /var/www/nextcloud/config/
3. adapt the path in config.php
# from
'path' => '/var/www/owncloud/apps',
# to
'path' => '/var/www/nextcloud/apps',
4. adapt the path in crontab
sudo crontab -u www-data -e
5. adapt the paths in the apache config
6. run the upgrade script, which takes care of the actual migration, then disable maintenance mode
sudo -u www-data php occ upgrade
sudo -u www-data php occ maintenance:mode --off

and that's it.

Categories: News

# Learning Modern 3D Graphics Programming

2015-12-29 22:26 UTC  by  madman2k

One of the best resources for learning modern OpenGL, and the one which helped me quite a lot, is the book at www.arcsynthesis.org/gltut/ – or let's better say, was. Unfortunately the domain expired, so the content is no longer reachable.

Luckily, the book was designed as an open-source project, and the code to generate the website is still available on Bitbucket. Unfortunately, this repository does not seem to be actively maintained any more.

Therefore I set out to make the book available again using GitHub Pages. You can find the result here:

https://paroj.github.io/gltut/

However, I did not simply mirror the pages, but also improved them in several places. So what has changed so far?

• converted mathematical expressions from SVG to inline MathML. This not only improves readability in browsers, but also fixes broken math symbols when generating the PDF.
• replaced XSLTHL with highlight.js for better syntax highlighting
• added a “fork me on GitHub” badge to the website to signal that one can easily contribute
• enabled the Optimization Appendix. While it is not complete, it already provides some useful tips and maybe encourages contributions.
• updated the documentation build to work on Linux
• added instructions on how to build the website/PDF docs

Hopefully these changes will generate some momentum, so this great book gets extended again. As there were also non-cosmetic changes, like the new chapter, I also tagged a 0.3.9 release.

In the process of the above work, I found out that there is also a mirror of the original book at http://alfonse.bitbucket.org/oldtut/. This one is, however, at the state of the 0.3.8 release, meaning it not only misses the above changes, but also some adjustments that happened post-0.3.8 at Bitbucket.

Categories: Graphics