OpenSpace is open source interactive data visualization software designed to visualize the entire known universe and portray our ongoing efforts to investigate the cosmos.

OpenSpace brings the latest techniques from data visualization research to the general public. It supports interactive presentation of dynamic data from observations, simulations, and space mission planning and operations. OpenSpace runs on multiple operating systems, with an extensible architecture powering high-resolution tiled displays and planetarium domes, and makes use of the latest graphics card technologies for rapid data throughput. In addition, OpenSpace enables simultaneous connections across the globe, creating opportunities for shared experiences among audiences worldwide.

Direct-manipulation in OpenSpace

For many people, touch is a direct and intuitive way to control a computer interface. It is especially powerful when the user does not have to map a set of controls to different interactions, but can instead manipulate objects the way they physically expect. Direct manipulation aims to do just that, which in effect removes the user interface.

The method developed and used in OpenSpace is a screen-space formulation. Each frame, every contact point that touches the surface of a celestial body is ray traced, and the geographic surface coordinates under it are found and saved. The camera transform then moves and orients itself so as to minimize the distance between the current frame’s contact points on the screen and the last frame’s surface coordinates projected into screen space. This is done with a non-linear least squares minimization algorithm. In effect, a geographic location is locked to the user’s contact points, and the camera moves so that the location follows the fingers. The solver is unconstrained, which means that adding more contact points simply introduces more degrees of freedom (up to six) to be controlled; a minimal sketch of the solve follows the list below.

  • One contact point gives the user control over two degrees of freedom, taken to be the orbit X and Y angles around the focus.
  • Two contact points additionally give the user control of the distance and rotation relative to the focus point.
  • Three or more contact points give the user control over all six degrees of freedom, the last two being the panning angles in X and Y.
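
For concreteness, here is a minimal, self-contained sketch of this screen-space solve for the one-contact-point case (orbit X and Y angles). The projection function is a toy stand-in and all names are illustrative; the actual solver projects geographic surface coordinates through the full camera transform and handles up to six parameters.

```cpp
// Minimal sketch of the screen-space least squares solve; the projection
// is a toy stand-in, not the real camera model.
#include <array>
#include <cmath>
#include <cstdio>
#include <vector>

using Params = std::array<double, 2>; // one contact point: orbit X/Y angles

struct Vec2 { double x, y; };

// Toy projection: camera parameters map (nonlinearly) to a screen position.
Vec2 project(const Params& q) {
    return { std::sin(q[0]), std::sin(q[1]) };
}

// Residual: projected surface point minus this frame's contact point.
std::vector<double> residual(const Params& q, const Vec2& contact) {
    Vec2 p = project(q);
    return { p.x - contact.x, p.y - contact.y };
}

int main() {
    const Vec2 contact = { 0.3, -0.2 }; // finger position this frame
    Params q = { 0.0, 0.0 };            // camera parameters from last frame
    const double lambda = 1e-3;         // Levenberg-Marquardt damping
    const double h = 1e-6;              // step for the numerical Jacobian

    for (int iter = 0; iter < 50; ++iter) {
        const std::vector<double> r = residual(q, contact);
        if (r[0] * r[0] + r[1] * r[1] < 1e-12) break;

        // Numerical Jacobian: 2 residuals x 2 parameters.
        double J[2][2];
        for (int j = 0; j < 2; ++j) {
            Params qh = q;
            qh[j] += h;
            const std::vector<double> rh = residual(qh, contact);
            for (int i = 0; i < 2; ++i) J[i][j] = (rh[i] - r[i]) / h;
        }

        // LM step: solve (J^T J + lambda I) dq = -J^T r for the 2x2 case.
        const double A00 = J[0][0]*J[0][0] + J[1][0]*J[1][0] + lambda;
        const double A01 = J[0][0]*J[0][1] + J[1][0]*J[1][1];
        const double A11 = J[0][1]*J[0][1] + J[1][1]*J[1][1] + lambda;
        const double b0  = -(J[0][0]*r[0] + J[1][0]*r[1]);
        const double b1  = -(J[0][1]*r[0] + J[1][1]*r[1]);
        const double det = A00*A11 - A01*A01;
        q[0] += ( A11*b0 - A01*b1) / det;
        q[1] += (-A01*b0 + A00*b1) / det;
    }
    std::printf("orbitX = %f, orbitY = %f\n", q[0], q[1]);
}
```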

Field lines, model loading, and CEF

Oskar & Michael N.: During the last week we’ve made more options available within the field lines GUI, to prepare for the Sun-Earth event on June 27th. For the live GPU-traced implementation, users can now switch out or update seed point files at run time. We’ve also received new data that resembles what will be used for the event. This data is, however, not producing the expected output, and we’re trying to locate the issue. For the solar browsing, a buffer has been created. The idea is to have one or several threads continuously perform decoding jobs by predicting which images are going to be shown. The prediction is done by looking at the OpenSpace delta time multiplied by a minimum update limit; the images that map to these times are put into the buffer. If the buffer runs empty, the PBO update is simply skipped for that frame, so nothing stalls the rendering thread. Another idea is to make the resolution time-dependent: when the buffer runs empty, decoding is performed at a lower resolution until the buffer thread catches up again.
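
As a rough illustration of the buffering idea, here is a minimal producer/consumer sketch; all names and the prediction loop are assumptions for illustration, not the actual module code.

```cpp
// Minimal sketch of a decode-ahead buffer; all names are illustrative.
#include <mutex>
#include <optional>
#include <queue>
#include <vector>

struct DecodedImage {
    double time;
    std::vector<unsigned char> pixels;
};

// Stand-in for the real (OpenJPEG-based) decode.
DecodedImage decodeImageAtTime(double t) {
    return { t, std::vector<unsigned char>(512 * 512) };
}

class ImageBuffer {
public:
    // Called from a worker thread: decode images for the display times
    // predicted from the current delta time and minimum update limit.
    void prefetch(double now, double deltaTime, double minUpdateLimit) {
        for (int i = 1; i <= 4; ++i) {
            double predicted = now + i * deltaTime * minUpdateLimit;
            DecodedImage img = decodeImageAtTime(predicted);
            std::lock_guard<std::mutex> lock(_mutex);
            _queue.push(std::move(img));
        }
    }

    // Called from the render thread: take the next image if one is ready.
    // When the buffer has run dry this returns nothing, and the PBO update
    // is simply skipped for that frame.
    std::optional<DecodedImage> tryPop() {
        std::lock_guard<std::mutex> lock(_mutex);
        if (_queue.empty()) {
            return std::nullopt;
        }
        DecodedImage img = std::move(_queue.front());
        _queue.pop();
        return img;
    }

private:
    std::mutex _mutex;
    std::queue<DecodedImage> _queue;
};
```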

Jonathan: Since last week I’ve shown the interface to a number of people and managed to gather some feedback, and parts of the interface have changed accordingly. For example, zoom and roll interactions no longer scale with the number of fingers, along with numerous other small improvements. I’ve also developed a unit test for the LM algorithm to be able to plot the path the solver takes to converge to its solution. This was useful when debugging certain issues with the algorithm, such as slow convergence or missteps in the gradient descent. One long-standing bug has thus been fixed! I’ve also moved all parameters and tuning variables of the algorithm into properties, so the user can change how the interface behaves through the GUI.

Rickard & Michael S.: Last week we made some big changes to the structure of mesh generation. The texturing of the models is now done entirely in shaders. This reduces the size of the models and also their loading times. It also allows us to use textures from other cameras or different camera positions, which means we can get colored textures instead of only black-and-white ones. We’ve also looked into how we load the models: they are now loaded per subsite rather than one model at a time, which will further improve loading times. Our main goal right now is to improve the frame rate and the loading of the models.

Klas: A CMake setup is finally in place for the Chromium Embedded Framework (CEF). OpenSpace, together with the CEF dynamic library, the CEF process helper, and OpenSpace’s own dynamic libraries, now compiles and bundles into an application on macOS. Work has begun on actually creating the embedded web browser within the WebGUI module. I’m expecting this to be something of a challenge, since CEF expects to be the main part of an application, not the other way around as in our case. We do not want CEF to be the star of our application; that should be OpenSpace itself.
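
For illustration, here is a sketch of the CEF process helper’s entry point, based on CEF’s documented C++ API; our actual setup may differ in the details.

```cpp
// Sketch of the CEF process helper, based on CEF's public C++ API.
#include "include/cef_app.h"

int main(int argc, char* argv[]) {
    CefMainArgs mainArgs(argc, argv);
    // The helper executable exists only to run the renderer/GPU
    // subprocesses that CEF spawns; this call does not return until
    // the subprocess exits.
    return CefExecuteProcess(mainArgs, nullptr, nullptr);
}
```

The main application, by contrast, calls CefInitialize() and can drive CEF with an external message pump (one CefDoMessageLoopWork() call per frame), which is one way of keeping OpenSpace’s own render loop, rather than CEF, in control.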

Gene: I worked with Vikram on WMS server configuration, transferring knowledge so I can take it over when he leaves. I have also gone over his terrain bump-mapping work. I’m continuing to work on the MPCDI configuration in SGCT for the E&S Digistar projection system.

A very productive week

Jonathan: Last week I improved the Windows-to-TUIO bridge that is used on the touch tables to send touch input to OpenSpace. I also introduced a tracker for double taps, as well as a GUI mode for touch so that the existing on-screen GUI can be operated by touch, giving the touch interface full control of the OpenSpace application.

Klas: I started to investigate how the Chromium Embedded Framework (CEF) may be used in OpenSpace. The idea is to use web technologies to create a richer user interface. Work has mostly been focused on getting a working CMake setup up and running. So far the build is failing, but that is expected at this early stage. It might be tricky to fit the multi-process setup that CEF requires into OpenSpace, but it should be doable.

Oskar & Michael N.: The solar browsing module can now handle projective texturing, which means that we can display time-varying spacecraft imagery wrapped around the surface of a hemisphere. The camera’s field of view is also visualized, simply by taking the maximum size of the images at the position of the Sun and scaling linearly along the path to the spacecraft. This should preferably be done more rigorously, by looking at the raw values in the metadata. The image plane is now also moved using a Gaussian function, to gain more precision and smoother movement near the Sun. Last week the CCMC staff asked for the possibility to color field lines by different quantities and magnitudes. This has been implemented by passing measured values along the lines to a transfer function; the user can change which variable to color by from within the GUI. Large parts of the code have been cleaned up in preparation for introducing the concept of dynamic seed points.
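
The coloring scheme boils down to a per-vertex lookup. Here is a minimal sketch with assumed names: each measured value along a line is normalized and used to sample a 1D transfer function; in practice the lookup happens on the GPU.

```cpp
// Minimal sketch of coloring field lines by a measured quantity via a
// 1D transfer function lookup; names are illustrative.
#include <algorithm>
#include <cstddef>
#include <vector>

struct Color { float r, g, b, a; };

// Sample a 1D transfer function with a normalized value in [0, 1].
Color sampleTransferFunction(const std::vector<Color>& tf, float t) {
    t = std::clamp(t, 0.0f, 1.0f);
    const auto idx = static_cast<std::size_t>(t * (tf.size() - 1));
    return tf[idx];
}

// Map one quantity (e.g. field magnitude) along a line to vertex colors.
std::vector<Color> colorFieldLine(const std::vector<float>& values,
                                  const std::vector<Color>& tf,
                                  float minVal, float maxVal)
{
    std::vector<Color> colors;
    colors.reserve(values.size());
    for (float v : values) {
        const float t = (v - minVal) / (maxVal - minVal);
        colors.push_back(sampleTransferFunction(tf, t));
    }
    return colors;
}
```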

Rickard & Michael S.: We finally have a working prototype where the user can view all rover surface models released by NASA. The next step is to improve mesh generation and texturing of the models. Another is to improve the rendering of the rover traversal path to fit the HiRISE terrain as well as possible. The first dome test will be performed within the next few weeks.

Hard work paying off

Jonathan: Since last week I’ve focused on fixing bugs and refactoring the code inside the TouchInteraction class. Overall, the two interaction modes are now more intertwined, movement is less erratic with fewer misinterpretations of touch input (especially taps), and sensitivity now scales with the touch screen size. I’ve also reached out to gather feedback on the current interface during the coming week, so that I can make the interface more intuitive. There are still some bugs in the LM algorithm, specifically in the gradient calculations, that remain to be squashed!

Rickard & Michael S.: Last week we mainly worked on the placement of the rover terrain models, to correctly and automatically align them to the HiRISE height map. Another task we finished was to dynamically and asynchronously load and cache models with their corresponding textures. We implemented this on top of the chunk tree that already exists in the globe browsing module, so that we can reuse its culling. We also wrote a small script to download binaries and textures for the model generation process. The next task is to render the correct rover terrain model depending on the position of the camera.
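
A minimal sketch of the asynchronous load-and-cache pattern, with all names assumed for illustration: a request kicks off a background load, and the render path only uses a model once its future is ready, with the chunk tree culling deciding which subsites to request.

```cpp
// Minimal sketch of asynchronous model loading with a cache.
#include <chrono>
#include <future>
#include <map>
#include <memory>
#include <string>

struct Model { /* vertex and texture data */ };

// Stand-in for the real disk load of a subsite's models and textures.
std::shared_ptr<Model> loadModelFromDisk(const std::string& subsite) {
    return std::make_shared<Model>();
}

class ModelCache {
public:
    // Kick off (or reuse) an asynchronous load for a subsite.
    std::shared_future<std::shared_ptr<Model>> request(const std::string& subsite) {
        auto it = _cache.find(subsite);
        if (it != _cache.end()) {
            return it->second;
        }
        auto fut = std::async(std::launch::async, loadModelFromDisk, subsite).share();
        _cache.emplace(subsite, fut);
        return fut;
    }

    // Render path: only use a model once loading has finished; otherwise
    // skip it this frame.
    std::shared_ptr<Model> tryGet(const std::string& subsite) {
        auto it = _cache.find(subsite);
        if (it == _cache.end()) {
            return nullptr;
        }
        if (it->second.wait_for(std::chrono::seconds(0)) != std::future_status::ready) {
            return nullptr;
        }
        return it->second.get();
    }

private:
    std::map<std::string, std::shared_future<std::shared_ptr<Model>>> _cache;
};
```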

Michael N. & Oskar: Last week we implemented a wrapper around some of the features of OpenJPEG. The new format, together with asynchronous texture uploading, gives us the ability to stream from disk at resolutions from 512×512 to 1024×1024, depending on the desired frame rate. A higher resolution is still desirable, but the bottleneck right now is the decoding speed of large images with this library. As for the field lines, after a meeting with the CCMC staff responsible for the Sun-Earth event, compute shaders were put on hold to prioritize precomputed lines. Some refactoring and cleanup has also been done in the field line module.
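
For reference, here is a hedged sketch of decoding a JPEG2000 file with OpenJPEG 2.x at a reduced resolution (the cp_reduce parameter discards the finest resolution levels, halving each dimension per level), which is one way to trade resolution against decoding speed. Error handling is omitted, and details may differ from our wrapper.

```cpp
// Sketch of reduced-resolution JPEG2000 decoding with OpenJPEG 2.x.
#include <openjpeg.h>

opj_image_t* decodeReduced(const char* path, int reduceLevels) {
    opj_stream_t* stream = opj_stream_create_default_file_stream(path, OPJ_TRUE);
    opj_codec_t* codec = opj_create_decompress(OPJ_CODEC_JP2);

    opj_dparameters_t params;
    opj_set_default_decoder_parameters(&params);
    params.cp_reduce = reduceLevels; // 0 = full size, 1 = half, 2 = quarter...
    opj_setup_decoder(codec, &params);

    opj_image_t* image = nullptr;
    opj_read_header(stream, codec, &image);
    opj_decode(codec, stream, image);
    opj_end_decompress(codec, stream);

    opj_stream_destroy(stream);
    opj_destroy_codec(codec);
    return image; // caller frees with opj_image_destroy()
}
```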

OpenSpace NYC: Cassini & Messenger Buildathon

See Your Visualizations of NASA Missions on the Planetarium Dome!

Calling all 3D Artists, Graphics Programmers & Software Developers, Astronomers & Astrophysicists: Would you like to see your own interactive space simulation running on the Hayden Planetarium dome?

Come join our OpenSpace “Buildathon” and be among the first to join the OpenSpace creator community!

The Buildathon will take place at the American Museum of Natural History in New York City on October 29, 2016. For more information visit https://openspacenyc.splashthat.com/

OSIRIS-REx Launch Event at AMNH

Today, NASA launched the OSIRIS-REx mission to obtain a sample of the asteroid Bennu and return it to Earth for further study. Scientists chose to sample Bennu, a primitive, carbon-rich near-Earth object, due to its potentially hazardous orbital path and informative composition.

On Monday, September 12, join Carter Emmart at the American Museum of Natural History for an after-hours public program built with OpenSpace, in which the OSIRIS-REx mission’s projected trajectory and potential sampling locations will be visualized on the AMNH Hayden Planetarium dome.


New Horizons’ Media Responses

Breakfast at Pluto Event at AMNH LeFrak Theatre


Our event was a great success, with much media attention throughout the world. If you have a news article covering our event, please let us know! First and foremost, the whole event took place in a Google Hangout that is available online on YouTube.

Pre-event information: American Museum of Natural History

News articles:
  • International: Engadget, Space.com, Gizmodo, SpaceFlight Insider
  • Swedish: Linköping University Press Release, Norrköpings Tidningar, Corren

Prerelease for New Horizons’ Closest Approach

In honor of the New Horizons spacecraft’s closest approach to Pluto, we have prepared another pre-alpha version of OpenSpace in binary form. You can find all information about it in the Download section (or by following this link).

Using this pre-alpha version, we are organizing a global event connecting many planetariums the world over to celebrate this unique, once-in-a-lifetime experience.

Prerelease for Pluto-Palooza at the AMNH

To coincide with the Pluto-Palooza at the AMNH, we are releasing a pre-alpha version of OpenSpace in binary form for Windows and Mac platforms. All information for this release can be found here.

Space.com was present at the Pluto-Palooza in New York, and some of the OpenSpace footage is shown alongside the mission scientists’ excellent explanations.

IMERSA demo

This Saturday we will have an unofficial demo session at the IMERSA conference in Boulder, CO. The demo will be given in the Fiske Planetarium after the official events, hosted by Carter Emmart and driven by Miroslav Andel.