IdealOS Mark 4
It's been a whole lot of work to get to Mark 4 of IdealOS. My real goal for this sprint was to have something that at least visually looks like a real operating system. What do you think?
Let’s cover the architecture and changes, shall we?
The central server is refactored to have separate manager classes for windows, connections, apps, and finally a separate class for routing messages around. All messages in the entire system go through the primary router which gives us a single point of control. This will come in handy in a minute.
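To make the single-point-of-control idea concrete, here's a minimal sketch of what such a router could look like. The class and method names are my invention for illustration, not the actual IdealOS code:

```javascript
// Minimal sketch of a central message router (hypothetical names).
// Every message names a target; because everything funnels through
// route(), cross-cutting features (logging, keybinding translation,
// permission checks) can hook in at this one choke point.
class Router {
    constructor() {
        this.handlers = new Map()   // target id -> handler function
        this.taps = []              // observers that see every message
    }
    register(id, handler) { this.handlers.set(id, handler) }
    tap(fn) { this.taps.push(fn) }
    route(msg) {
        this.taps.forEach(fn => fn(msg))      // the single point of control
        const handler = this.handlers.get(msg.target)
        if (handler) handler(msg)
    }
}
```

A debug overlay, for example, could `tap()` the router to watch all traffic without any app knowing.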
The server is started with a set of config files which define which apps should run, which apps are available, what services to start, and the available fonts and themes. Because all of this is configurable, unit testing is far easier: I can start the real server with a custom config, stuff in events, and see where it breaks.
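As a sketch, a test config along these lines would cover everything listed above. The field names and paths here are invented for illustration; the real format may differ:

```json
{
    "autostart": ["dock", "menubar", "sidebar"],
    "available": ["texteditor", "todolist", "fractal"],
    "services": ["themes", "translations"],
    "fonts": ["resources/default-font.json"],
    "themes": ["resources/light.json", "resources/dark.json"]
}
```

A unit test can then point the server at a stripped-down config with no autostart apps at all and drive it purely with injected events.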
The GUI toolkit is just a library. Technically any app could just look for input events and send out draw calls (many games do this), but in practice most apps will want to use the GUI toolkit.
The toolkit is a run of the mill tree of widgets with one tree per window. All input and drawing is recursive and, currently, very inefficient. It doesn’t try to track dirty rects or any other optimizations. That can be added later. In the interest of being merely slow and not completely glacial, the entire tree draws to a graphics context which collects all of the draw calls into a buffer and then submits them all at once. No app is required to do this, but it’s an easy optimization.
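The buffering idea is simple enough to sketch. Again, the API names here are my guesses, not the real toolkit:

```javascript
// Sketch of a buffering graphics context (hypothetical API).
// Widgets issue draw calls as usual; nothing crosses the message bus
// until submit(), so one batched message is sent per repaint instead
// of one message per draw call.
class BufferedContext {
    constructor(send) {
        this.send = send      // function that ships a batch to the server
        this.calls = []
    }
    fillRect(x, y, w, h, color) {
        this.calls.push({ op: 'rect', x, y, w, h, color })
    }
    drawText(x, y, text) {
        this.calls.push({ op: 'text', x, y, text })
    }
    submit(windowId) {
        this.send({ type: 'draw_batch', window: windowId, calls: this.calls })
        this.calls = []       // start fresh for the next repaint
    }
}
```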
Current GUI components in the toolkit:
- button, checkbox, toggle button
- popup selector
- vbox, hbox layouts
- label & multi-line label
- textbox & multi-line textbox (super buggy though)
- icon button (for the dock)
- dropdown menus (for the menubar)
I moved away from the separate bitmap font and font metrics. Now the standalone font editor loads and saves a single JSON file with both metrics and bitmaps embedded in it. It looks like this:
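A rough sketch of what such a combined file might contain (the field names and bitmap encoding here are my invention, not the actual format):

```json
{
    "name": "default",
    "height": 8,
    "glyphs": {
        "65": {
            "width": 5,
            "baseline": 7,
            "bitmap": ["01110", "10001", "11111", "10001", "10001"]
        }
    }
}
```

Keeping metrics and bitmaps in one file means the font editor, the server, and the toolkit all load a single artifact instead of keeping two files in sync.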
Using the new editor I’ve redrawn the standard ASCII font and used some extra codes for icons and other useful symbols.
Currently there is just the one font, but the system supports multiple for future use.
In addition to handling the usual mouse and keyboard events, the GUI toolkit also supports themes. The server has a CSS-like document (currently JSON) which represents the UI theme. For every standard GUI component it lists the colors of the foreground, background, border, etc., including hover and selected states. When a widget draws, it requests the values for the current theme. Theme values are cached. If the user changes the current theme, an event is sent to all windows alerting them of the change. This simply clears the cache so that all widgets will fetch new values.
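The cache-and-clear pattern can be sketched in a few lines. The names are hypothetical, but the mechanism matches what's described above:

```javascript
// Sketch of client-side theme lookup with caching (hypothetical names).
// fetchFromServer is whatever round-trip asks the server for a theme
// value; it only runs on a cache miss.
class ThemeCache {
    constructor(fetchFromServer) {
        this.fetch = fetchFromServer   // (widget, prop, state) -> value
        this.cache = new Map()
    }
    get(widget, prop, state = 'normal') {
        const key = `${widget}.${prop}.${state}`
        if (!this.cache.has(key)) {
            this.cache.set(key, this.fetch(widget, prop, state))
        }
        return this.cache.get(key)
    }
    // Called when a theme_changed event arrives: the next draw pass
    // will transparently re-fetch everything from the new theme.
    clear() { this.cache.clear() }
}
```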
Currently there are very incomplete light and dark themes. In the future the system will become more advanced, but I’m happy to have the hooks in place now. Apps generally shouldn’t need to care about colors and patterns. They just draw using the current theme values, much like a web page with CSS. Eventually the theme file might become actual CSS. The themes reside entirely in the server. Apps never have to deal directly with the themes; they just get values from the server.
The debug app has a button to toggle between the themes and you can see all apps magically refresh themselves.
The dark theme in action. Might be a tad ugly (and buggy!)
I added the first support for keybindings. Before this release the display manager would send raw keystrokes to the server, which would forward them on directly to the currently focused app. Now there is a keybindings file and an extra step in the server. If a key event matches a keybinding then it will swallow the key event and issue a new action event. This event is for semantically meaningful things that should be common across apps.
Currently there are only keybindings for up, down, left, and right navigation, but over time we will extend it to more. The magic part is that apps are completely unaware of the process. They just listen for common action events and don’t have to care what particular keybindings the user has set. At some point we could even support non-keyboard events. Maybe a mouse gesture could trigger the volume up event, or an assistive device could navigate using breath controls. The point is that apps don’t have to be written to specifically support these devices. It will all be a part of the keybinding system.
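The server-side translation step described above amounts to a small lookup. The binding names and event shapes here are illustrative, not the real file format:

```javascript
// Sketch of the server's keybinding step (hypothetical format).
// If a raw key event matches a binding it is swallowed and replaced
// by a semantic action event; otherwise the raw keystroke passes
// through to the focused app untouched.
const keybindings = {
    'ArrowUp':    'navigate_up',
    'ArrowDown':  'navigate_down',
    'ArrowLeft':  'navigate_left',
    'ArrowRight': 'navigate_right',
}

function translate(event) {
    const action = keybindings[event.key]
    if (action) {
        return { type: 'action', name: action, target: event.target }
    }
    return event   // no binding: forward the raw keystroke as-is
}
```

Because apps only ever see `action` events like `navigate_up`, swapping the table for mouse gestures or an assistive device changes nothing on the app side.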
Common text that is language specific can now be stored in a translations file. Any widget, such as a button, can be configured with a `text_key` instead of `text`. This will request the translation from the server before rendering (cached for performance). As with themes, if the global language changes all apps receive an event, clear their caches, and repaint.
Currently there are just English and lolcat, with only a few translations. This will develop more in the future.
Some controls translated to lolcat.
The display manager is just another app which registers itself as a screen, then all graphics calls are routed to it. It also has direct access to the mouse and keyboard and uses them to send out events.
Though the display manager is also a window manager and can allocate windows and set which is focused, it is not the single source of truth. The server has an authoritative list of apps and windows. This means if the display manager crashes it can restart, reconnect to the server, and fetch all essential state. This makes the whole system more resilient. It also makes development easier because I can be actively developing and restarting the display manager constantly while the other apps never notice. In the future we could even have multiple display managers active at once, with the apps being completely unaware.
There are now multiple classes of windows. Standard windows are created with the type `plain` and the display manager will draw them with title bars, close buttons, and resize areas. Now there are a few other window types:
- popup: This is a child window of a plain window. It will be drawn without chrome and is forced to disappear when it loses focus.
- menubar, dock, sidebar: these are specialty types that are drawn without titlebars and other decorations.
- widget: this is a window which is not drawn directly to the screen. Instead it is considered part of another app. Whenever a widget window draws, its draw calls are routed to its owner window, which can modify them before sending them on to the real display. The same happens for input events. Essentially it lets one app embed another one inside of it. This is currently used only for the sidebar widgets, which are embedded inside the sidebar app. However, it is a powerful concept that I hope to use more throughout the system in the future.
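The widget-window routing can be sketched as one branch in the server's draw path. The window records and message shapes here are hypothetical:

```javascript
// Sketch of widget-window draw routing (hypothetical message shapes).
// Draw batches from a widget window are delivered to the owning app,
// which may transform the calls (e.g. translate them into the
// sidebar's coordinate space) before forwarding to the real display.
function routeDrawBatch(msg, windows, send) {
    const win = windows[msg.window]
    if (win.type === 'widget') {
        const owner = windows[win.owner]
        send(owner.app, { type: 'widget_draw', from: msg.window, calls: msg.calls })
    } else {
        send('display', msg)   // normal windows go straight to the display
    }
}
```

Input events would take the mirror-image path: the display sends them to the owner, which decides what the embedded widget actually receives.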
All apps are separate Node.js processes that can only communicate through the single message bus. All input, output, drawing, and everything else goes through this single channel, which enforces security. Of course they are currently full Node apps and could mess with the system through native APIs, but eventually they will be isolated in containers.
I’ve added a few more apps, so now the full list is:
- fractal: drawing a Mandelbrot set
- debug: toggles for theme and language
- guitest: shows all of the currently supported widgets
- jessecalc: a simple calculator
- sidebar widgets for weather, time, and music (w/ dummy data)
- texteditor: a simple text editor
- todolist: a list of three todos with functioning checkboxes
The sidebar, dock, and menubar are also separate apps, though with special capabilities because they are part of the system and not 3rd party apps. The menubar receives `set_main_menu` events, the dock is allowed to launch apps, and the sidebar uses window embedding for its widgets.
Now that the system basically works for running simple apps with windows I think it’s time to make the server more robust and start building new display managers.
The only display manager today is called Sidecar. (Actually that’s not technically true: there is another headless display manager used for automated testing, but it doesn’t actually draw anything.)
This is the Sidecar tool in action
Sidecar is a web-based testing app that handles drawing and mouse interactions, plus some graphical overlays for determining which windows are part of which apps, where the cursor is, and monitoring logs of the whole system. In short it’s a debugging tool. While it’s great for that, it’s slow and requires a whole web browser to run.
Next I want to build a second display manager in Rust using hardware-accelerated graphics. This will accomplish two goals. First, it will flush out bugs by having a second implementation of the protocol. Second, it will get us closer to being able to run on a Raspberry Pi without an existing window system (X11 or Wayland).
I also want to rewrite the server backend to be cleaner and more type safe. I’m currently looking at using TypeScript to make the code safer while still letting me use the flexible Node.js ecosystem. At some point I might rewrite the central server in Rust, but I think that’s a ways off.
Oh, and I want to have a real audio subsystem. Much as it’s interesting to be able to route events and drawing commands around, I’d like to be able to stream audio between apps and various inputs and outputs. I’m currently looking at some Node server-side WebAudio implementations to see if they do what I need. Right now it can
The final big feature is to fully integrate a database. This has always been a core concept of IdealOS. I’ve prototyped db-driven apps using React in a different repo. I think it’s time to bring this into the core and have some full end-to-end computation.
So for next time we’re looking at:
- New display manager in Rust
- Refactor the central server with TypeScript
- Research audio streams
- Integrate database
- more GUI components (list, scroll box, table)
- tons of bug fixes
- some apps that really read and write data to the database
Try it out
If you want to try IdealOS yourself you’ll need to check out these two repos.
In the first repo do `npm install; npm run server`, which will start the central server and some apps.
In the second repo do `npm install; npm run start`, which will launch Sidecar in your browser and connect to the central server.
If you run across any problems tweet me.
Posted June 18th, 2021