Planetarium applications of sphviewer #22
base: master
Conversation
Simplified the code for dealing with the remainder particles
Simplified the kview computation so that particles whose kernel intersects the field of view are included even when the particle itself lies outside of it
Corrected the kview computation so that particles with hsml[i] < 0 are no longer included (equivalent to the old zpart > 0 condition)
Disentangled the kernel pixel size (the tt variable) from the kernel projected size (the new tt_f float variable), allowing kernels with sizes under the resolution to be computed properly
Added the "-ffast-math" compiler flag by default. This speeds up the code by more than a factor of 3.
…py-sphviewer, upgrading to version 1.2.1
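For reference, here is a minimal sketch of how a "-ffast-math" default could look in a setuptools-based build. The extension name and source path below are placeholders, not the actual py-sphviewer setup.py entries:

```python
# Hypothetical sketch: enabling -ffast-math for a C extension build.
# The module path and source file are illustrative placeholders.
from setuptools import Extension

render_ext = Extension(
    "sphviewer.extensions.render",                 # placeholder module path
    sources=["sphviewer/extensions/rendermodule.c"],
    extra_compile_args=["-ffast-math"],            # trades strict IEEE 754
                                                   # semantics for speed
)
```

Note that -ffast-math relaxes IEEE 754 compliance (e.g. reordering of floating-point operations), which is usually acceptable for rendering but worth keeping in mind.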
Thanks a lot Adrien. Indeed, this would be a very nice feature to have implemented. I'll try to review it as soon as I find some time.
Thanks! Indeed, there have been a few significant changes to rendermodule.c. Some of these were improvements to the core sphviewer implementation before the addition of the new projection.
Now for the projection implementation, rendermodule.c also got some additions due to the way kernels in that projection are distorted, unlike the core projections, which keep circular kernels. I'll insist on that aspect when sharing the maths for clarity, but one way to look at it for now is via the Tissot's indicatrix associated with that projection: the Tissot's indicatrix characterizes the local distortions that occur when projecting a sphere onto a flat surface. In practice, it looks at the shapes that result from projecting identical circles distributed over the surface of that sphere. Since kernels in 3D space project to circles on the sphere of viewing angles, the distortion the Tissot's indicatrix reveals will also affect these kernels after the projection.

The core sphviewer could only compute the convolution of perfectly circular kernels, so it was not suitable for an accurate projection of the kernels in this new scheme. Instead, I needed to develop a new special convolution to treat these distortions: these modifications are highlighted in this commit diff, where I added a conditional structure (block between lines 109 and 175 of the new file); the classic case (previously between lines 99 and 118) has been moved into the else branch (now between lines 152 and 174), and the first branch is the new projection case (between lines 109 and 149).
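To illustrate the distortion, here is a small self-contained sketch (my own illustration, not part of the PR code) of the Tissot scale factors for an azimuthal equidistant projection centered on the zenith: the radial scale is constant, while the tangential scale grows with the angular distance from the center, so a circular kernel projects to an increasingly stretched shape toward the rim.

```python
import math

def tissot_scales(theta):
    """Tissot scale factors for the azimuthal equidistant projection
    at angular distance `theta` (radians) from the projection center.
    The radial scale is 1 everywhere; the tangential scale is
    theta/sin(theta), so circles away from the center stretch along
    the tangential direction."""
    radial = 1.0
    tangential = theta / math.sin(theta) if theta > 0 else 1.0
    return radial, tangential

# near the center, kernels stay circular; toward the rim they stretch
h0, k0 = tissot_scales(1e-6)
h1, k1 = tissot_scales(math.pi / 2)   # 90 degrees from the center
```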
Before sharing the maths I thought I should provide a quick shortcode that could be used as a proof of concept. However, while working on the more complex animated version of that shortcode (see down below), it revealed some 'segmentation fault (core dumped)' fatal errors. After looking into their cause, I found they were due to interference between my new treatment that includes intersecting kernels whose centroids lie outside the field of view and the new low-resolution treatment: since the former would leave particles in render.render with xx and yy coordinates outside the (xsize,ysize) domain, some accesses to the image array went out of bounds.

As for the proof of concept now, using QuickView similarly to this tutorial, we can just add the kwarg 'projection' to QuickView in the following way (with the camera target placed at a latitude of 30 degrees above the horizon):
Keep in mind that the center of the image is supposed to be the zenith when projected onto a planetarium dome, with the part facing the spectator being the upper half of the image (the lower half being behind them - note also that when using the method QuickView.imsave, this ends up reversed). Obviously, seeing the projection in action isn't easy with only this, so here's a (slightly heavier) code where we rotate the camera around its target:
Note that this time I haven't set the 'origin' kwarg of imshow to 'lower' as is done in QuickView, so the target is in the lower half of the view (it just comes down to my preference). I wasn't expecting to spend so much time debugging today, so I'll work on assembling the maths in a proper format to be shared tomorrow.
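For readers without a build of the branch, the geometric mapping the snippets above exercise can be sketched in pure numpy (my own illustration, not the PR's C code): a direction in the camera frame maps to a pixel whose radius from the image center is proportional to its zenith angle.

```python
import numpy as np

def fisheye_pixel(direction, xsize=500, ysize=500, max_zenith=np.pi / 2):
    """Sketch of an azimuthal equidistant ('fisheye') mapping: a
    direction vector in the camera frame (z pointing to the zenith)
    lands at a radius proportional to its zenith angle."""
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    zenith = np.arccos(np.clip(d[2], -1.0, 1.0))  # angle from the zenith axis
    azimuth = np.arctan2(d[1], d[0])
    r = zenith / max_zenith * (min(xsize, ysize) / 2)
    x = xsize / 2 + r * np.cos(azimuth)
    y = ysize / 2 + r * np.sin(azimuth)
    return x, y

# the zenith direction maps to the center of the image
cx, cy = fisheye_pixel([0.0, 0.0, 1.0])
# a horizontal direction maps to the rim when max_zenith is 90 degrees
rx, ry = fisheye_pixel([1.0, 0.0, 0.0])
```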
Thanks! If you could send me the math by email, that would be great. I'll wait for the math, but from what I have seen so far, it seems to me that the implementation of the projection involves a few steps: (1) define the camera, (2) project the particles onto a sphere and (3) perform the rendering. I was thinking that instead of restricting the projection of the particles onto the sphere to your particular projection, it would be much more general to write a dedicated function that does it, so that it can be used with any future projection. After all, the only thing that matters is the angular coordinates and the angular size of the "smoothed" sphere of each particle, no? Once this is calculated, each projection is just a mapping of these coordinates onto the plane. I did some quick maths, and it seems to me that it could be possible, in principle, to calculate all the particle coordinates in this way. If this works, then we could create a very convenient framework for future extensions.
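A minimal numpy sketch of steps (1)-(2) as described above (my own illustration; the function and variable names are not from the repository): reduce each particle to angular coordinates and an angular size as seen from the camera, after which any projection is just a 2D mapping.

```python
import numpy as np

def angular_coordinates(pos, hsml, camera=(0.0, 0.0, 0.0)):
    """Project particles onto the unit sphere around the camera:
    return latitude, longitude, and the angular radius subtended by
    each smoothing length."""
    rel = np.asarray(pos, dtype=float) - np.asarray(camera, dtype=float)
    r = np.linalg.norm(rel, axis=1)
    lat = np.arcsin(rel[:, 2] / r)          # latitude in [-pi/2, pi/2]
    lon = np.arctan2(rel[:, 1], rel[:, 0])  # longitude in (-pi, pi]
    ang_size = np.arctan(np.asarray(hsml, dtype=float) / r)
    return lat, lon, ang_size

# one particle on the horizon, one straight above the camera
pos = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 2.0]])
hsml = np.array([0.1, 0.2])
lat, lon, ang = angular_coordinates(pos, hsml)
```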
Do you have some thoughts?

I thought some more on this and tried a quick implementation of the logic outlined in my previous message. It seems to work for me. If you check the equirectangular branch that I just pushed to the repository, you will see that I am barely changing the existing code. The idea is that particles get projected onto the sphere (radius=1). At the moment this is not a separate function, but it should be, as it has nothing to do with the projection! Then, the projection is applied to the angular coordinates (and to the smoothing lengths too!). For the particular projection that I have chosen, the smoothing lengths are the same in the x and y direction of the projection, so this could be done with only one value of the smoothing length for each particle. However, I updated the code to carry two smoothing lengths anyway. I think this approach, or something close to this, will allow adding this feature while leaving the code clean and as modular as possible, minimizing the interaction between the classes. What do you think? Do you think your projection is suitable for this, or am I missing something? Thanks,
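The two-smoothing-length idea can be sketched like this (an illustration of the logic, not the code in the equirectangular branch): on an equirectangular map the latitude width of a kernel is just its angular size, while the longitude width stretches by 1/cos(lat) toward the poles.

```python
import numpy as np

def equirectangular(lat, lon, ang_size):
    """Sketch of an equirectangular projection with two smoothing
    lengths per particle: (x, y) are normalized map coordinates and
    (hx, hy) the kernel half-widths, hy set by the angular size alone
    and hx stretched by 1/cos(lat) toward the poles."""
    x = lon / (2.0 * np.pi) + 0.5   # map longitude to [0, 1]
    y = lat / np.pi + 0.5           # map latitude to [0, 1]
    hy = ang_size / np.pi
    hx = ang_size / (2.0 * np.pi * np.cos(lat))
    return x, y, hx, hy

# one kernel at the equator, one at higher latitude (1 radian)
x, y, hx, hy = equirectangular(np.array([0.0, 1.0]),
                               np.array([0.0, 0.0]),
                               np.array([0.1, 0.1]))
```

At the equator the two half-widths differ only by the 2:1 aspect of the map, while away from it the longitude half-width grows, which is why carrying two smoothing lengths per particle is the natural representation here.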
By the way, I tested it using the example of the repository:
Sorry, I was working on writing down the maths and was planning on answering your comment after sending it to you, since some aspects of it could be important when talking about the distortions. I'll try to answer your latest comments quickly anyway:
The steps you described are not far from what I do, though it doesn't exactly project the particles onto a sphere, just kind of: the idea of projecting onto a sphere is only an intermediate step toward the final projection scheme. The azimuthal equidistant projection is a scheme designed to project a sphere onto a flat surface, so when establishing the maths, I formally pass through a projection of the space onto the sphere. But when it comes down to the implementation, I just treat the said sphere as the set of all the possible directions that start from the camera position. I'm not sure if that makes sense? What I mean is that the projection is already "a mapping of these coordinates onto the plane" as you mentioned (at least when it comes to the scene.scene step; for render.render it's a bit more complex, as I'll discuss below).
The fact that I had to give the 'extent' as another argument to render.render comes from the design choice I made around the Camera 'zoom', which I use to narrow or broaden the maximum zenith angle shown.
While I pointed out this design choice as purely experimental, I do believe it can serve a purpose in the future (after all, this is equivalent to a "fisheye" lens) and it fits well within the design of the zoom parameter. It also means that render.render cannot assume [0,xsize] and [0,ysize] both correspond to [-90,90] degrees of angular coordinates, and must know the 'extent' values as a reference.

Taking all this into account, as for the double smoothing lengths solution you proposed, you have probably noticed by now, based on what I described, that it would be incompatible as-is, since the kernels require more than just 2 smoothing lengths. However, I do agree that this sort of modularization would make a more convenient framework for future projections, so I've been giving some thought to it. I think what it needs is not only the 2 smoothing lengths you proposed, but perhaps also additional parameters that would encode the orientation of the longer axis and the curvature applied to it. I believe something like that can be achieved by giving the kernel an angle corresponding to the said orientation and a length corresponding to the radius of curvature. While continuing to write down the maths, I'll give some more rigorous thinking to this implementation.
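As a sketch of the extra kernel parameters suggested above (hypothetical, not implemented in the branch): for the azimuthal equidistant case, the long axis of a distorted kernel is tangential, so its orientation follows the centroid's azimuth in the image, and a natural radius of curvature is the centroid's distance from the image center.

```python
import math

def kernel_distortion_params(px, py, cx, cy):
    """Hypothetical extra kernel parameters for the azimuthal
    equidistant case: the long axis is tangential (perpendicular to
    the radial direction through the centroid), and the kernel bends
    along a circle of radius equal to the centroid's distance from
    the projection center (cx, cy)."""
    dx, dy = px - cx, py - cy
    orientation = math.atan2(dy, dx) + math.pi / 2  # tangential direction
    curvature_radius = math.hypot(dx, dy)
    return orientation, curvature_radius

# a centroid 100 pixels to the right of the image center:
# its long axis is vertical (pi/2) and it bends on a radius-100 circle
angle, radius = kernel_distortion_params(350.0, 250.0, 250.0, 250.0)
```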
Although my earlier discussion partly answers this comment, I wanted to give some observations on the equirectangular projection and the implementation in that branch. Notably, you've probably noticed from my explanation that the distortions in the equirectangular scheme are quite different from those in the azimuthal equidistant scheme, as the distorted kernels are indeed elliptical (well, sort of - more on that below) and easily aligned with the (x,y) axes of the field of view. This makes the equirectangular scheme very straightforward to implement, unlike the azimuthal equidistant scheme, where one must consider distorted kernels with different orientations and incurved ellipticities depending on the centroid position.

Additionally, the distorted kernels can become very different from ellipses in both schemes: when reaching the nadir/south pole position for the azimuthal equidistant projection, or both the nadir/south pole and zenith/north pole positions for the equirectangular projection. In the latter case, any disc that contains the zenith position ends up projected onto a curved band that spreads all along the top part of the projected field. This is also something that I will need to consider more rigorously.

Lastly, just an observation that is more a matter of personal appreciation: I noticed you assessed the equirectangular case through a condition in the code.

Well, that reply did not end up as quick as I was planning, sorry for the long read! Back to formatting the maths!
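The pole pathology mentioned above can be checked with a two-line sketch (my own illustration): a disc on the sphere covers the pole exactly when its angular radius exceeds its center's colatitude, and in an equirectangular map such a disc spreads over the full longitude range as a band at the top.

```python
import math

def covers_pole(lat_center, ang_radius):
    """True when a disc on the sphere (center latitude and angular
    radius in radians) contains the north pole; its equirectangular
    image then spans all longitudes instead of staying an ellipse."""
    return ang_radius > (math.pi / 2 - lat_center)

near_pole = covers_pole(math.radians(80), math.radians(15))  # covers the pole
at_equator = covers_pole(0.0, math.radians(15))              # does not
```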
For example, once you have the coordinates of each particle on the unit sphere, and you know how the smoothing lengths translate into delta_lat and delta_lon (the widths in latitude and longitude, respectively), then you can use that information to calculate delta_x and delta_y in the map's coordinates for each particle located at pixel x,y. This should always be possible to do, unless I am missing something (again!). I'll try to work on the next level of complexity after the equirectangular projection. Hopefully, this will help to clarify what I have in mind.
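In pixel units, the conversion described here could look like the following sketch (my own illustration; it assumes a full-sphere equirectangular extent of 2*pi in longitude and pi in latitude):

```python
import math

def angular_to_pixels(delta_lat, delta_lon, xsize, ysize):
    """Convert angular kernel widths (radians) to pixel widths on an
    equirectangular map covering 2*pi in longitude and pi in latitude."""
    delta_x = delta_lon / (2.0 * math.pi) * xsize
    delta_y = delta_lat / math.pi * ysize
    return delta_x, delta_y

# on a 2:1 map, equal angular widths give equal pixel widths
dx, dy = angular_to_pixels(0.1, 0.1, 1000, 500)
```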
I've developed a new implementation to support more projection types, starting with a first new one: the azimuthal equidistant projection. With this also come some simplifications of the C module code, as well as a correction to the way rendermodule was summing up the projected kernels.
Until now, sphviewer supported 2 types of projection: perspective (r = fixed value) and isometric (r = infinity). With this version, one can use an azimuthal equidistant projection of the scene with the camera target at the center of it (by default). This can be done by giving the string value 'fisheye' to a new parameter 'projection' in the kwargs of the Camera object given to Scene. However, this usage isn't the most valuable one.
The interest in the azimuthal equidistant projection resides in the fact that it is a suitable format for most planetarium projectors. In these, the central point of this projection generally corresponds to the zenith position of the dome. Because of this, placing the camera target in this default center location isn't a comfortable configuration for a planetarium spectator.
Instead, the 'projection' kwarg of Camera can be tweaked by appending to 'fisheye' the string equivalent of the wanted zenith angle in degrees at which the target should be placed. For example, the value 'fisheye45' would place that target at a latitude of +45 degrees above the horizon, while 'fisheye90' would simply place it at the horizon (additionally, 'fisheye0' would leave the target at the zenith, 'fisheye180' would move the target all the way to the nadir, and any negative zenith angle - i.e. 'fisheye-***' - moves the target behind the spectator).
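The string convention above can be summarized as a small parser (a hypothetical helper for illustration, not the PR's actual code):

```python
def parse_projection(projection):
    """Hypothetical parser for the 'projection' kwarg convention:
    'fisheye' optionally followed by the zenith angle (in degrees) at
    which the camera target is placed; bare 'fisheye' leaves the
    target at the zenith (0 degrees)."""
    if not projection.startswith("fisheye"):
        raise ValueError("unknown projection: %s" % projection)
    suffix = projection[len("fisheye"):]
    return float(suffix) if suffix else 0.0

target_at_45 = parse_projection("fisheye45")   # 45 deg from the zenith
target_default = parse_projection("fisheye")   # target at the zenith
target_behind = parse_projection("fisheye-30") # behind the spectator
```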
The implementation itself required modifying:
For the latter point, I needed to add the 'projection' type as an argument of render.render. Additionally, I made a design choice that led me to also give it the 'extent' as another argument. This design choice is purely experimental and does not serve any purpose at the moment: although the Camera 'zoom' does not really have a utility when it comes to planetarium projection, I decided to use it as a way to narrow or broaden the maximum zenith angle shown in the azimuthal equidistant projection. In that design, a zoom of 1 corresponds to the planetarium-type 90-degree maximum zenith angle, while a zoom near 0 broadens it to almost a 180-degree maximum zenith angle, and a really large zoom narrows the maximum zenith angle toward 0 degrees. Because the mentioned kernel distortions need to know the centroids' positions with respect to these maximum zenith angles, the 'extent' values are used as a reference.
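One formula consistent with the zoom behaviour described above (my own assumption for illustration; the actual implementation may use a different mapping) is max_zenith = 180/(1 + zoom) degrees:

```python
def max_zenith_angle(zoom):
    """Hypothetical zoom-to-field mapping matching the description:
    zoom=1 -> 90 deg (the planetarium dome), zoom->0 -> ~180 deg
    (almost the full sphere), zoom->inf -> ~0 deg (a narrow cone)."""
    return 180.0 / (1.0 + zoom)

wide = max_zenith_angle(0.01)    # almost the full 180-degree sphere
dome = max_zenith_angle(1.0)     # the planetarium 90-degree dome
narrow = max_zenith_angle(20.0)  # a narrow cone near the zenith
```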
Lastly, before implementing this new projection scheme, I had to prepare the code to be adjustable to the addition of more projection schemes. For that I've made some simplifications and corrections to the code:
That's it for now; I will be sharing the maths I used for my implementation of the azimuthal equidistant projection. In the meantime, let me know if you have any thoughts about it. I also apologize for any misuse of the "pull request" feature of GitHub: this is all very new to me!
Edit: added some links to specific commits for readability.