SoftspaceAR | Prototype05
Toward a real tool for real work.
SoftspaceAR Prototype05 is out!
Please give it a go!
In February of this year, I started to build and release these prototypes to explore an exciting new possibility space: productivity and creativity tools designed natively for AR headsets.
In the announcement post for this project, I wrote that:
Over the coming months, we will release 5–10 prototypes to explore and (in)validate promising ways to harness augmented reality as a medium for thought…
I expect the first few prototypes to be all over the map, the next few to start converging on core underlying principles, and the last ones to build more deliberately on what came before.
And that is indeed what is happening. This, the fifth prototype in the series, is the first one that feels like it’s building more on top of its predecessors than striking out on its own into new territory. It is heavily based on Prototype04, while also drawing on key lessons from Prototype03.
This doesn't mean that the experimentation is over. There are still aspects of the SoftspaceAR UX and product story that need prototyping and validation. But I've built out enough versions, and have tested them with enough people, to have a good feel for which ideas make sense, and which probably don’t.
In addition to a wide range of quality-of-life improvements and bug fixes which I will not bore you with, Prototype05 contains four major features that move SoftspaceAR toward being a real tool in which you can do real work.
These are: the web browser, images support, text item LODing, and multiple open topics.
Being able to access the internet is a core part of any contemporary knowledge workflow. Prototype05 implements a personal web browser to give users access to the rest of the web from within the headset. Given how many of the other tools that users rely on have web apps, this browser becomes a powerful and flexible window into existing workflows.
When browsing the web, if the user hovers over an HTML element that is identified as an image, a "download" icon appears to indicate that this image can be saved into the current workspace. Long click (or click and pull toward yourself) to save out the image.
Future versions of Softspace will enable website bookmarking, and the ability to snip portions of the browser window as images.
As you can read in our origin myth, Softspace was born during a research residency I had in an art and design studio. Immediately before that residency, I had been in architecture school. Today, architects and designers remain a core audience for this tool. Therefore, visual research, reasoning, and communication are critical use cases for Softspace.
These use cases, of course, require the ability to work with image files.
Prototype05 ports over the image processing and displaying modules from SoftspaceVR. Right now, the only way to bring image files into the workspace is by saving them out from the web browser, but we're already working on a Dropbox file importer.
Side Note: Image LODing
One of the (many) interesting (and annoying) new problems that spatial computing presents is the question of how to technically display a large number of images in a 3D space.
Unlike on a conventional scrollable 2D document, where only a limited fraction of a document's contents are visible at once, a 3D workspace allows the user to potentially see many hundreds or thousands of images simultaneously.
At the same time, users are able to view images from very close up; to avoid blurriness, the system needs to be able to render images at very high resolution.
Given the limited VRAM of any computing device, to say nothing of the even tighter constraints of a mobile device like the Meta Quest 2, it would be impossible to display all these images at the highest possible level of quality all the time.
Therefore, Softspace implements a novel Level-of-Detail system that uses the apparent angular size of each image to assign it an LOD value, which then determines which version of the image texture to load and display. These assignments are updated several times a second, and images transition seamlessly from 256px preview textures to 2048px full-res textures, as needed.
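The core of such an angular-size-driven LOD assignment can be sketched roughly as follows. This is an illustrative reconstruction, not Softspace's actual implementation: the 256px and 2048px endpoints come from the post, while the intermediate resolutions, display pixel count, and field of view are assumed values.

```python
import math

# Available texture resolutions, from low-res preview to full resolution.
# The 256px/2048px endpoints are from the post; the middle steps are assumptions.
LOD_RESOLUTIONS = [256, 512, 1024, 2048]

def apparent_angular_size(image_width_m: float, distance_m: float) -> float:
    """Angle (in degrees) that the image's width subtends at the viewer's eye."""
    return math.degrees(2 * math.atan(image_width_m / (2 * distance_m)))

def pick_lod(angular_size_deg: float, fov_deg: float = 90.0) -> int:
    """Choose a texture resolution that keeps on-screen pixel density roughly
    constant. Assumes a display ~2000px across a ~90-degree field of view."""
    display_px_across = 2000  # assumed per-eye horizontal resolution
    needed_px = display_px_across * (angular_size_deg / fov_deg)
    for res in LOD_RESOLUTIONS:
        if res >= needed_px:
            return res
    return LOD_RESOLUTIONS[-1]  # clamp to full resolution when very close
```

Running a check like this several times a second per image is cheap; the expensive part in practice is streaming the texture data in and out of VRAM as assignments change.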
One of the issues users experienced in previous prototypes was that text became too small to read when the workspace was scaled down or moved far away from the user. Once the text of the workspace became illegible, the workspace became nonsensical—a pretty visualization of something without meaning.
Prototype05 takes a first pass at rendering text bodies at different Levels-of-Detail, depending on text item size and distance relative to the user.
Just before a text block becomes too small to read, the text body is replaced with a larger font. To prevent overflow of the text item bounds, this larger text is truncated. I find that being able to read the first few sentences or words of text items is enough to give me a much better sense of what different areas of the workspace are about.
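The swap between full text and a truncated, enlarged preview might look something like this. The threshold angle, preview length, and font scale here are illustrative guesses, not Softspace's actual values:

```python
def text_lod(body: str, apparent_height_deg: float,
             legible_threshold_deg: float = 10.0,
             preview_chars: int = 120) -> tuple[str, float]:
    """Return (text, font_scale) for a text item.

    Above the legibility threshold, render the full body at normal size.
    Below it, swap in a truncated preview at a larger font so the first
    words stay readable from a distance.
    """
    if apparent_height_deg >= legible_threshold_deg:
        return body, 1.0
    if len(body) > preview_chars:
        body = body[:preview_chars].rstrip() + "…"
    return body, 2.5  # enlarged so the preview stays legible
```

The same two-level structure would also accommodate the AI-summary idea below: the truncation step is simply replaced by a call that fetches a cached summary of the body.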
Extra points: a more sophisticated version of this system could display an auto-generated summary of the text (e.g. using AI) instead of just using the first sentences.
Multiple Open Topics
In Prototype04, only one Topic item could be expanded at a time. If you expanded a Topic while another was open, the first would automatically be collapsed. The intention behind this design was to permit intuitive transclusion of content items (text, images) across multiple Topics.
However, a common point of feedback was that Prototype04 felt too messy and chaotic with all the Topics and text items floating everywhere. People who tried both Prototypes 03 and 04 tended to prefer 03's single fixed layout, especially while creating and editing content.
Prototype05 partially solves this problem by allowing multiple topics to be expanded and positioned next to each other. (If two topics share any content items, only one of them can be expanded at a time.)
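The exclusivity rule reduces to a set-intersection check when a topic is expanded: any open topic sharing a content item with it gets collapsed. A minimal sketch, with data shapes assumed for illustration:

```python
def expand_topic(topic_id: str, open_topics: set[str],
                 topic_items: dict[str, set[int]]) -> set[str]:
    """Expand a topic, collapsing any already-open topic that shares
    content items with it (a transcluded item can only be displayed in
    one expanded topic at a time). Returns the new set of open topics."""
    to_collapse = {
        other for other in open_topics
        if topic_items[other] & topic_items[topic_id]
    }
    return (open_topics - to_collapse) | {topic_id}
```

Topics with disjoint content pass through untouched, which is what allows several of them to be laid out side by side.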
I'm also working on a way for future versions of Softspace to have a single, global ordinospatial layout, like Prototype03 has.
An incomplete list of things I want to improve in upcoming prototypes:
Arm fatigue. As originally pointed out by Andy Matuschak, and subsequently raised by many other users, the point-of-view based ray cursor is tiresome to use in a way that a mouse, or even the Oculus OS laser pointer, is not. I have been sketching out improvements on this front.
Cursor icons. It’s about time to swap out the red debugging sphere that represents where the cursor is pointing in space with a proper set of icons.
Topic ergonomics. Topic items currently maintain a vertical orientation, which can cause strain when you spend a lot of time writing text or moving images around on one. I'm investigating more ergonomic orientations for topics that are actively being edited.
Click-to-focus. Often, users want to get a closer look at a particular item without having to “swim” themselves over to it. I’m thinking about implementing a double-click-to-focus interaction that makes it much faster and easier to inspect things.
More UI feedback. SoftspaceVR relied on hand controllers, which for all their downsides, did provide excellent haptic feedback when the user was interacting with things. In a tracked-hand interaction model, we need much more visual and auditory feedback to let the user know what they’re doing in the UI, or what will happen next.
As always, thank you so much for taking the time to read this! Your comments, ideas, and concerns are always welcome. You can help us on this journey by:
Getting and testing this prototype 🧑🏽‍🔬
Following and retweeting us on Twitter 🐦
Joining the Discord channel 👯‍♀️
Until next time!