It took a few days, but I've finally migrated my site to
Nikola. I used to have blog.mardy.it
served by Google's Blogger, the main sections of
www.mardy.it
generated with Jekyll, the image
gallery served by the old and glorious Gallery2,
plus a few leftovers from the old Drupal site.
Planet maemo
In the last few days I've been writing a simple website for Imaginario. I'm a terrible site designer, and I can't really say that I enjoy writing websites, but it's something that from time to time people might need to do. While the PhotoTeleport website is built with Jekyll, this time I decided to try some other static site generator, in order to figure out if Jekyll is indeed the best for me, or if there are better alternatives for my (rather basic) needs.

A Venetian gondoliere thought it a good idea to decorate his gondola with fascist symbols, yet he can't handle that others think it not a good "joke"
The post A Pathetic Human Being appeared first on René Seindal.

Kayaking in Venice is a unique experience. Venice Kayak offers guided kayak tours in the city of Venice and in the lagoon.
The post Venice Kayak appeared first on René Seindal.

I have put up a separate site with my street photography from Venice
The post Venice Street Photography appeared first on René Seindal.

The locals know Venice
The post Photo walks in Venice appeared first on René Seindal.

Brexit doesn't affect me directly, but being a Dane living in Italy means my existence relies on freedom of movement. Brexit attacks that freedom.
The post Brexit from a distance appeared first on René Seindal.
If you want to employ Blender for computer vision, e.g. for generating synthetic data, you will need to map the parameters of a calibrated camera to Blender, as well as map the Blender camera parameters back to those of a calibrated camera.
Calibrated cameras are typically based on the pinhole camera model, whose core is the camera matrix together with the image size in pixels:
K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}, \quad (w, h)
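As a quick illustration of what K and the perspective division do, here is a small sketch with made-up intrinsics (the numbers are example values, not from any real calibration):

```python
# hypothetical intrinsics for a 640x480 image (example values only)
f_x, f_y = 800.0, 800.0     # focal lengths in pixels
c_x, c_y = 320.0, 240.0     # principal point
K = [[f_x, 0.0, c_x],
     [0.0, f_y, c_y],
     [0.0, 0.0, 1.0]]

# a 3D point in camera coordinates (Z is the depth along the optical axis)
X, Y, Z = 0.1, -0.05, 2.0

# apply K, then divide by the depth (perspective division)
u = (K[0][0] * X + K[0][2] * Z) / Z
v = (K[1][1] * Y + K[1][2] * Z) / Z
print((u, v))  # -> (360.0, 220.0)
```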
But if we look at the Blender camera, we find many non-standard and duplicated parameters, some with unusual units and some without any, such as
- the unitless shift_x
- the duplicated angle, angle_x, angle_y and lens
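For reference, the duplicated parameters are linked by the usual field-of-view relation; a sketch with assumed example values (not taken from any specific scene):

```python
import math

# assumed example values: a 35 mm lens on a 32 mm wide sensor
lens = 35.0
sensor_width = 32.0

# horizontal field of view in radians; Blender's angle / angle_x
# follow the same relation for a horizontal sensor fit
angle_x = 2.0 * math.atan(sensor_width / (2.0 * lens))
print(math.degrees(angle_x))  # roughly 49 degrees
```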
After some research on their meaning, and after fixing various bugs in the previously proposed conversion formulas, I came up with the following Python code to convert from Blender to OpenCV:
import bpy

# get the relevant data
cam = bpy.data.objects["cameraName"].data
scene = bpy.context.scene

# assume the image is not scaled
assert scene.render.resolution_percentage == 100
# assume angles describe the horizontal field of view
assert cam.sensor_fit != 'VERTICAL'

f_in_mm = cam.lens
sensor_width_in_mm = cam.sensor_width
w = scene.render.resolution_x
h = scene.render.resolution_y

pixel_aspect = scene.render.pixel_aspect_y / scene.render.pixel_aspect_x

f_x = f_in_mm / sensor_width_in_mm * w
f_y = f_x * pixel_aspect

# yes, shift_x is inverted. WTF blender?
c_x = w * (0.5 - cam.shift_x)
c_y = h * (0.5 + cam.shift_y)

K = [[f_x, 0, c_x],
     [0, f_y, c_y],
     [0,   0,   1]]
So, to summarize the above code:
- Note that f_x/f_y encodes the pixel aspect ratio, not the image aspect ratio w/h.
- Blender enforces identical sensor and image aspect ratios, so we do not have to consider them explicitly. Non-square pixels are instead handled via pixel_aspect_x/pixel_aspect_y.
- We left out the skew factor s (non-rectangular pixels) because neither OpenCV nor Blender supports it.
- Blender allows us to scale the output, resulting in a different resolution, but this can easily be handled post-projection, so we explicitly do not handle it here.
- Blender has the peculiarity of converting the focal length to either a horizontal or a vertical field of view (sensor_fit). Going down the vertical branch is left as an exercise to the reader.
The reverse transform can now be derived trivially as:
cam.shift_x = -(c_x / w - 0.5)
cam.shift_y = c_y / h - 0.5
cam.lens = f_x / w * sensor_width_in_mm

# non-square pixels go entirely into pixel_aspect_y
pixel_aspect = f_y / f_x
scene.render.pixel_aspect_x = 1.0
scene.render.pixel_aspect_y = pixel_aspect
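As a sanity check, plugging sample numbers into the forward and reverse formulas shows that they invert each other. The values below are stand-ins for the Blender camera attributes (no bpy is needed for the arithmetic itself):

```python
# stand-in values for the Blender camera (example numbers, not a real scene)
f_in_mm = 35.0
sensor_width_in_mm = 32.0
w, h = 1920, 1080
shift_x, shift_y = 0.05, -0.02
pixel_aspect = 1.0  # square pixels

# forward: Blender -> OpenCV (same formulas as above)
f_x = f_in_mm / sensor_width_in_mm * w
f_y = f_x * pixel_aspect
c_x = w * (0.5 - shift_x)
c_y = h * (0.5 + shift_y)

# reverse: OpenCV -> Blender
shift_x_back = -(c_x / w - 0.5)
shift_y_back = c_y / h - 0.5
lens_back = f_x / w * sensor_width_in_mm

# the round trip recovers the original parameters
assert abs(shift_x_back - shift_x) < 1e-9
assert abs(shift_y_back - shift_y) < 1e-9
assert abs(lens_back - f_in_mm) < 1e-9
print("round trip OK")
```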
I'm happy (and thankful) to have been invited to speak at the Linux Piter conference in Saint Petersburg on November 2nd. I'll be talking about the Ubports project, the community-driven continuation of Ubuntu Touch, which Canonical developed until April 7th, when the project was cancelled.
Demo of Ubuntu convergence in action
The conference talks will be in English and Russian, with simultaneous translation into the other language. The videos will appear a couple of weeks after the conference on the organization's YouTube channel, but in any case I will write a post here — unless, of course, something goes terribly wrong and I feel ashamed of my performance ;-). In order to minimize this risk, I won't be giving a live demo (at least, not before I finish going through my slides), but I'll take a couple of Ubports devices with me, and people are very welcome to come and check them out.
As far as I've understood, most of the audience will not be very familiar with Linux-based mobile devices, but I guess that could work to my advantage: no difficult questions, yay! ;-)
And I really hope that some member of the audience gets interested in the project and decides to become part of it. We'll see. :-)
I finished my earlier work on build environment examples, illustrating how to do versioning of shared object files right with autotools, qmake, cmake and meson. You can find it here.
Enough with the political posts!
One major annoyance for me when surfing on Android is ads. They not only increase page size and loading time, but also take away precious screen real estate.