5. Tissue cartography with “seams”
In tutorial 4, we used tissue cartography on a very simple surface - a mildly curved plane. Here, we’ll look at how to deal with a more complicated surface - that of the gastrulating Drosophila embryo. This is an ellipsoidal surface and will therefore require seams to map it to the plane.
from blender_tissue_cartography import io as tcio
from blender_tissue_cartography import mesh as tcmesh
from blender_tissue_cartography import remesh as tcremesh
from blender_tissue_cartography import interpolation as tcinterp
from importlib import reload
import numpy as np
from skimage import transform
import os
import matplotlib.pyplot as plt
# only run this if you have installed pymeshlab
import pymeshlab
from blender_tissue_cartography import remesh_pymeshlab as tcremesh_pymeshlab
Important conventions
- Image axis 0 is always the channel axis. All other axes are never permuted.
- Mesh coordinates are always saved in microns.
- The UV map (the map of our surface mesh to the cartographic plane) always maps into the unit square, \(u\in[0,1], \; v\in[0,1]\). All of our projections will be square images (with transparent regions for parts of the UV square not covered by the unwrapped mesh).
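To make the unit-square convention concrete, here is a minimal numpy sketch (not part of the library) of the regular grid on which all projected images are sampled; whether u indexes rows or columns is an illustrative assumption, not a guaranteed layout.

# sample positions of a 512x512 UV grid covering the unit square [0,1]^2
uv_grid_steps = 512
u, v = np.meshgrid(np.linspace(0, 1, uv_grid_steps),
                   np.linspace(0, 1, uv_grid_steps), indexing="ij")
print(u.shape)  # (512, 512) - one sample position per pixel of the projection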
Load and subsample data for segmentation
Data description: in toto light-sheet recording of the gastrulating Drosophila embryo with the membrane marker CAAX-mCherry.
We begin by creating a directory for our project, where we’ll save all related files, and load the dataset. We then enter the relevant metadata - the filename, the resolution in microns, and how much we want to subsample for segmentation purposes.
metadata_dict = {'filename': 'drosophila_example/Drosophila_CAAX-mCherry'}
image = tcio.adjust_axis_order(tcio.imread(f"{metadata_dict['filename']}.tif"))
print("image shape:", image.shape)
image shape: (1, 190, 509, 188)
Resolution info and subsampling
We now enter the resolution in microns/pixel for each axis. You can typically get this info from the .tif metadata. Here we use light-sheet data, which has isotropic resolution. We then subsample the image for rapid segmentation.
metadata_dict['resolution_in_microns'] = (1.05, 1.05, 1.05)
metadata_dict['subsampling_factors'] = (1/2, 1/2, 1/2)
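As a quick sanity check on the metadata (plain numpy, nothing library-specific), the image shape and the resolution together give the physical extent of the recording:

# physical size of the volume in microns: (z, y, x) pixel counts times microns/pixel
size_in_microns = np.array(image.shape[1:]) * np.array(metadata_dict['resolution_in_microns'])
print(size_in_microns)  # approximately [199.5, 534.45, 197.4] microns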
We now downscale the image. For very large volumetric images, you should use the option use_block_averaging_if_possible=True and choose subsampling factors that (a) are inverse integers like 1/2 or 1/3, and (b) if possible, divide the number of pixels along each axis evenly (for instance, 1/2 is good for an axis with 1000 pixels, but not ideal for one with 1001 pixels). A minimal compatibility check is sketched after the next cell.
subsampled_image = tcio.subsample_image(image, metadata_dict['subsampling_factors'],
                                        use_block_averaging_if_possible=False)
print("subsampled image shape:", subsampled_image.shape)
subsampled image shape: (1, 95, 254, 94)
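The following is a minimal sketch (a plain helper, not a library function) of the divisibility check suggested above. Here it returns False because axis 1 has 509 pixels, which is odd - this is why we left use_block_averaging_if_possible=False.

def is_block_averaging_compatible(shape, factors):
    """True if each factor is the inverse of an integer that divides the axis length."""
    for n_pixels, factor in zip(shape, factors):
        inverse = 1 / factor
        if abs(inverse - round(inverse)) > 1e-9 or n_pixels % round(inverse):
            return False
    return True

print(is_block_averaging_compatible(image.shape[1:], metadata_dict['subsampling_factors']))  # False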
Create 3d segmentation
Now we create a 3d segmentation, in this case using ilastik binary pixel classification. We could post-process the ilastik output here, for example using morphsnakes (a scipy-based alternative is sketched below). We then load the segmentation back into the Jupyter notebook.
The bright dots outside the embryo are fluorescent beads, necessary for sample registration in light-sheet microscopy. You can ignore them.
Attention: when importing the .h5 into ilastik, make sure the dimension order is correct! In this case, CZYX for both export and import.
# we now save the subsampled image as a .h5 file for input into ilastik for segmentation
tcio.write_h5(f"{metadata_dict['filename']}_subsampled.h5", subsampled_image)
# after creating an ilastik project, training the model, and exporting the probabilities, we load the segmentation
segmentation = tcio.read_h5(f"{metadata_dict['filename']}_subsampled-image_Probabilities.h5")
# select the first channel of the segmentation - it's the probability a pixel is part of the sample
segmentation = segmentation[0]
print("segmentation shape:", segmentation.shape)
segmentation shape: (95, 254, 94)
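The probabilities are used as-is below, but if your ilastik output is noisy, here is a minimal post-processing sketch using scipy.ndimage (as an alternative to morphsnakes; the threshold and iteration counts are illustrative):

from scipy import ndimage

binary = segmentation > 0.5                            # foreground = probability above 0.5
binary = ndimage.binary_closing(binary, iterations=3)  # close small gaps in the surface
binary = ndimage.binary_fill_holes(binary)             # fill fully enclosed cavities
# smooth back into a 0-1 field, usable in place of segmentation below
segmentation_cleaned = ndimage.gaussian_filter(binary.astype(float), sigma=1)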
# look at the segmentation in a cross section
plt.imshow(segmentation[:, :, 50], vmin=0, vmax=1)
Meshing
We convert the segmentation into a triangular mesh using the marching cubes method and save the mesh. We save all meshes as Wavefront .obj files (see Wikipedia).
Important convention: For sanity’s sake, we will always store all mesh coordinates in microns. This means rescaling appropriately after calculating the mesh from the 3d segmentation.
# now we create a 3d mesh using the marching cubes method
vertices, faces = tcremesh.marching_cubes(segmentation, isovalue=0.5, sigma_smoothing=3)

# EXTREMELY IMPORTANT - we now rescale the vertex coordinates so that they are in microns
vertices_in_microns = vertices * (np.array(metadata_dict['resolution_in_microns'])
                                  / np.array(metadata_dict['subsampling_factors']))
mesh = tcmesh.ObjMesh(vertices_in_microns, faces)
mesh.name = "Drosophila_CAAX-mCherry_mesh_marching_cubes"
mesh.write_obj(f"{metadata_dict['filename']}_mesh_marching_cubes.obj")
Optional - improve mesh quality using MeshLab
We can remesh the output of the marching cubes algorithm to obtain an improved mesh, e.g. one with more uniform triangle shapes. In this example, we remesh to make the triangulation more uniform. You can also try this out in the MeshLab GUI and export your workflow as a Python script (a hypothetical sketch follows below). Be careful not to move the mesh, or it will mess up the correspondence with the pixel coordinates!
mesh_remeshed = tcremesh_pymeshlab.remesh_pymeshlab(mesh)
mesh_remeshed.write_obj(f"{metadata_dict['filename']}_mesh_remeshed.obj")
To check that all went well, let’s overlay the mesh coordinates on a cross section of the image.
mesh = tcmesh.ObjMesh.read_obj(f"{metadata_dict['filename']}_mesh_marching_cubes.obj")
image = tcio.adjust_axis_order(tcio.imread(f"{metadata_dict['filename']}.tif"))

slice_image, slice_vertices = tcinterp.get_cross_section_vertices_normals(
    1, 100, image, mesh, metadata_dict["resolution_in_microns"],
    get_normals=False, width=2)

fig = plt.figure(figsize=(5, 5))
plt.scatter(*slice_vertices.T, s=5, c="tab:red")
plt.imshow(slice_image[0], origin="lower", vmax=10000)
Processing in blender
We now switch to blender and create a new empty project, which we will call Drosophila_CAAX-mCherry.blend. We import the mesh we just generated (File->Import).
I recommend using the “object” tab (orange square on the right toolbar) to lock mesh position and rotation so we don’t accidentally move it.
Mesh editing
The mesh we have created unfortunately contains a few holes, where the segmentation surface touched the image boundary:
Useful shortcuts: “A” selects all and “.” on the Numpad centers the view on the currently selected element.
Let’s fix them using blender’s mesh editing tools. Go to the “Modeling” workspace on the top toolbar. Press “2” to get to edge mode, select the loop around the hole, and then use “Face->Fill” to fill the hole.
We can also use the “Mesh->Fill Holes” tool. Afterward, we can smooth out our mesh in the “Sculpting” tab, for instance using the “Smooth” tool. This is the result:
Not super beautiful but this is for illustration purposes only.
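If you prefer to stay in Python, holes can also be closed programmatically with pymeshlab instead of blender’s GUI tools. This is a sketch only - the filter name and the maxholesize parameter may differ between pymeshlab versions:

ms = pymeshlab.MeshSet()
ms.load_new_mesh(f"{metadata_dict['filename']}_mesh_remeshed.obj")
ms.apply_filter('meshing_close_holes', maxholesize=100)  # close holes up to ~100 edges
ms.save_current_mesh(f"{metadata_dict['filename']}_mesh_holes_filled.obj")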
UV mapping
Let’s move on and get a UV map of the mesh. To do so, we go to the “UV Editing” tab on the top toolbar, and press “3” then “A” to select all faces (“1” selects vertices, “2” edges, and “3” faces). Click “UV->Sphere Projection” on the top panel to get an automated, default UV map:
Defining seams
Since our surface is not topologically equivalent to the cartographic UV square, we need to define seams along which to cut. Blender makes some choices automatically, but they might not be what we want.
Press “2” to go into edge selection mode, select a loop along which you want to cut, right-click, and select “Mark seam”. We’ll do this on the anterior and posterior poles of the embryo, as well as along the dorsal midline.
For the dorsal seam, it’s easiest to specify it by selecting two vertices and finding the shortest path between them. Press “1” for vertex mode, “right click” on the initial vertex, hold “ctrl” and “left click” on the final vertex, then press “2” to switch back to edge mode. Our seams now look like this:
Now we can click “UV-> unwrap” and get an unwrapping with our desired seams!
By default, blender will make a projection that tries to preserve areas. The two little caps in the bottom are our poles. To move stuff around in the UV map, select the objects of interest, press “g” to move, and “click” to confirm the new position. Try it by moving the pole caps a little away from one another.
There are various tools for editing UV maps.
Blender export
Once we are happy with our UV map, we click on export and save it as .obj with UVs and normals.
A few things are important:
- Always include UVs and normals. Otherwise, cartographic projection will fail!
- Only export selected items! With a more complicated blender project, you might end up exporting multiple meshes. This will trip up the cartographic projection algorithm.
- Save as a triangulated mesh, since some of the other tools/python geometry libraries we will use for more advanced examples work best with (or only with) triangular meshes. This option will subdivide any quads/polygons your mesh may have.
The new mesh file f"{metadata_dict['filename']}_mesh_uv.obj" now contains vertex normals and UV coordinates.
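Since missing UVs or normals make the projection fail, a quick sanity check is worthwhile. The attribute names texture_vertices and normals used here are assumptions about the ObjMesh class, so adjust them to your version of the library:

mesh_uv = tcmesh.ObjMesh.read_obj(f"{metadata_dict['filename']}_mesh_uv.obj")
print("has UVs:", len(mesh_uv.texture_vertices) > 0)  # assumed attribute name
print("has normals:", len(mesh_uv.normals) > 0)       # assumed attribute name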
Cartographic projection/interpolation onto UV grid
We now read in the new .obj file to interpolate the image data onto the 3d mesh. The UV grid always covers the unit square \([0,1]^2\). We additionally specify a series of offsets in the normal direction to make a multilayer projection. From this step, we also get the 3d coordinates and the vertex normals interpolated onto the UV grid.
normal_offsets = np.linspace(-2, 2, 5)  # in microns
metadata_dict["normal_offsets"] = normal_offsets  # add the info to the metadata
uv_grid_steps = 512

projected_data, projected_coordinates, projected_normals = tcinterp.create_cartographic_projections(
    image=f"{metadata_dict['filename']}.tif",
    mesh=f"{metadata_dict['filename']}_mesh_uv.obj",
    resolution=metadata_dict["resolution_in_microns"],
    normal_offsets=normal_offsets,
    uv_grid_steps=uv_grid_steps)
CPU times: user 982 ms, sys: 693 ms, total: 1.67 s
Wall time: 1.69 s
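Judging from the indexing used in the plots below (not from library documentation), projected_data is laid out as (channel, normal offset, u, v), and projected_coordinates carries the xyz components on its last axis. With one channel and five offsets we expect:

print(projected_data.shape)         # expected (1, 5, 512, 512)
print(projected_coordinates.shape)  # expected (512, 512, 3) - x, y, z in microns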
# save images for visualization in blender
texture_path = f"{os.getcwd()}/{metadata_dict['filename']}_textures"
tcio.save_stack_for_blender(projected_data, texture_path, normalization=(0.01, 0.99))
# save images as .tif stack for analysis
tcio.save_for_imageJ(f"{metadata_dict['filename']}_projected.tif", projected_data, z_axis=1)
tcio.save_for_imageJ(f"{metadata_dict['filename']}_3d_coordinates.tif", projected_coordinates)
tcio.save_for_imageJ(f"{metadata_dict['filename']}_normals.tif", projected_normals)
# save the metadata to a human and computer readable file
tcio.save_dict_to_json(f"{metadata_dict['filename']}_metadata.json", metadata_dict)
# show the projected data
fig = plt.figure(figsize=(4, 4))
plt.imshow(projected_data[0, 1])
# show the projected 3d coordinates
fig = plt.figure(figsize=(4, 4))
plt.imshow(projected_coordinates[..., 1])
Visualization
As in the previous example, we now use blender’s shading tools to show the projections as textures on our 3d mesh. But let’s go a step further and add a shader (green box, “Principled BSDF”) that gives a better appreciation of depth/3D-ness.