Incremental improvements to Open Simulator


Revision as of 07:23, 28 April 2023

The views expressed below are not necessarily those of the Open Metaverse Research Group, but are the author's alone.

Can we get to Open Simulator 2.0 by incremental improvements to what we have? Maybe. Here's an approach. These are intended to be reasonable goals that can be worked on independently by individuals or small groups.

These are topics for discussion at this point.

Goals

Try to get speed and quality up to GTA V level. That's over a decade old technology. UE5 demo quality would be nice but is probably not feasible yet. Unreal Engine requires extensive scene preparation in Unreal Editor, which works on the whole scene. Metaverses don't work that way.

Networking

Safely extending UDP message types.

We're very constrained by the UDP format. It's hard to add new messages: each end has to know exactly how long each message is, so unknown message types cannot simply be ignored. Here's a way to fix that.

Add this new message type:

    // EncapsulatedMessage
    // simulator -> viewer
    // viewer -> simulator
    // reliable
    {
	    EncapsulatedMessage High 31 NotTrusted Unencoded
	    {
		    Message Single
		    {	Data Variable 2 	} // contains any message, variable length
	    }
    }

This contains any other message, with a length. So any unknown message can just be skipped, since it has a length. This simplifies extensions to the protocol.
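A minimal sketch of how a reader would skip past a wrapped message it does not understand. The framing is an assumption for illustration: a 2-byte length prefix (matching the `Variable 2` field above, assumed here to be little-endian) followed by the inner message bytes.

```python
import struct

def read_encapsulated(buf: bytes, offset: int = 0):
    """Return (inner_message_bytes, next_offset).

    A reader that doesn't recognize the inner message can simply
    continue from next_offset, because the length is always known.
    """
    (length,) = struct.unpack_from("<H", buf, offset)  # Variable 2 => 2-byte length prefix
    start = offset + 2
    return buf[start:start + length], start + length

# Two back-to-back encapsulated messages in one buffer:
payload = struct.pack("<H", 3) + b"abc" + struct.pack("<H", 1) + b"z"
msg1, off = read_encapsulated(payload)
msg2, off = read_encapsulated(payload, off)
# msg1 == b"abc", msg2 == b"z"
```

The key property is that skipping works even when the inner message type is unknown, which is what makes the protocol extensible.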

Improved retransmission policy

The current message retransmission policy is to wait a fixed length of time for an ACK, then retransmit the message. This is repeated a few times, then the networking system gives up. The fixed time is a bit too long. Suggest using twice the measured round-trip time to start, then increasing that by a factor of 1.5x until retransmission either succeeds or the connection is dropped as broken.

Long retransmission stalls tend to break things. I suspect this is one of the problems with region crossings.
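The suggested backoff schedule can be sketched as follows (a hypothetical helper, not code from any existing viewer): the first wait is twice the measured RTT, and each subsequent wait grows by 1.5x.

```python
def retry_timeouts(rtt_ms: float, max_retries: int = 5, factor: float = 1.5):
    """Yield the wait time (ms) before each retransmission attempt."""
    timeout = 2.0 * rtt_ms          # first wait: twice the measured round-trip time
    for _ in range(max_retries):
        yield timeout
        timeout *= factor           # geometric backoff for each further retry

print(list(retry_timeouts(100.0, 4)))   # [200.0, 300.0, 450.0, 675.0]
```

With a 100 ms RTT, four retries span under two seconds total, versus the multi-second stalls a fixed long timeout produces.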

Concurrent message processing and rendering

Current viewers work like this:

  • Read some incoming messages, but not too many, to prevent overload.
  • Acknowledge incoming messages that were read.
  • Process updates.
  • Draw frame.

If the frame rate drops, so does the message acknowledgement rate. This is why frame rate affects ping time.

Suggested approach (used in Sharpview [1]):

  • Render thread:
    • Draw frame, repeat.
  • Network thread:
    • Read all incoming messages.
    • Acknowledge all incoming messages.
    • Put on queue for update thread.
  • Update thread:
    • Handle queued incoming messages.

Under overload, the incoming message queue could build up. In practice, this doesn't seem to happen often. That has to be monitored. The incoming queue could be split into two priority queues.

  • Object updates for objects near the camera
  • Everything else.

That way, lag mostly affects background objects, and network problems are separated from rendering delays.
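The two-queue split above can be sketched like this (names and the distance threshold are illustrative, not from any existing viewer): near-camera object updates go to a high-priority queue, everything else to a low-priority one, and the update thread drains the near queue first.

```python
from collections import deque

NEAR_RADIUS = 32.0  # meters; hypothetical threshold for "near the camera"

class UpdateQueues:
    def __init__(self):
        self.near = deque()
        self.far = deque()

    def push(self, msg, distance_from_camera: float):
        # Route by distance from the camera at arrival time.
        (self.near if distance_from_camera <= NEAR_RADIUS else self.far).append(msg)

    def pop(self):
        """Prefer near-camera updates; fall back to background ones."""
        if self.near:
            return self.near.popleft()
        if self.far:
            return self.far.popleft()
        return None

q = UpdateQueues()
q.push("tree at 500m", 500.0)
q.push("avatar at 5m", 5.0)
# q.pop() returns "avatar at 5m" before "tree at 500m"
```

Under overload, the far queue absorbs the backlog while nearby objects stay responsive, which is exactly the "lag mostly affects background objects" behavior described above.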

IPv6 support

Each simulator needs an IP address, and IPv6 addresses are much easier and cheaper to get than IPv4 addresses. Only a few message formats, the ones that carry IPv4 addresses used in setting up connections, need to change.

Content

Goal: allow more in-viewer editing by average users.

Mesh deduplication

All mesh cubes are mostly the same

Standard mesh parts kit

  • Library of standard low-LI mesh parts, all open source.
  • These appear in build menus.
  • More can easily be added. They're just mesh objects with public UUIDs.
  • Lower LI for standard mesh parts, since there's only one copy of the mesh per region.
  • Rescalable, but not parameterized. Parameterization is hard. (See Archimatix Pro for Sinespace for a real parametric system.)

Everybody is going glTF

glTF as standard format for both materials and meshes. That's where the industry is going. So is SL.

Avatars and clothing

Many options here. Needs much discussion.

Suggested goal: automatic clothing fitting, like Roblox now has.

Lights

We need more than just projector and point lights. Supporting many short-range, no-shadow directional lights (something glTF has) would be a big win. Those could be used freely to illuminate interiors. With a reasonable distance setting, they won't shine through walls and roofs. Vulkan supports this.

In-viewer content editing

With standard parts

See standard mesh parts kit above.

Custom parts

NVidia

NVidia Omniverse linkage is a possibility; it is available now for collaborative editing in Blender. It connects different graphics programs, so Blender, Photoshop, and the viewer could be connected. Needs further research.

Unity

Unity Hub also allows for collaborative editing with Blender. There are some issues with portability of materials from Blender to Unity Hub but porting materials back from Unity Hub to Blender is easy.

Inworld Blender

Given that Blender is open-source code, merging Blender code into the OS viewer would allow for mesh creation inworld.

Blender Kit

The Blender Kit website, and similar sites, allow integrated free use of meshes and material textures made available by other creators under various license types.

Full hierarchy of prims

Child prims can have their own children. Have a real hierarchy like everybody else. Required for glTF, which assumes a hierarchy. Surprisingly, not that hard. See SL issue BUG-232445. [2]
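A toy sketch of what a full hierarchy buys (all names here are hypothetical): each prim can parent other prims, matching the node tree glTF assumes, rather than the flat root-plus-children linkset model.

```python
from dataclasses import dataclass, field

@dataclass
class Prim:
    name: str
    children: list = field(default_factory=list)

    def link(self, child: "Prim"):
        self.children.append(child)

    def walk(self):
        """Depth-first traversal of the whole linkset."""
        yield self
        for c in self.children:
            yield from c.walk()

car = Prim("car body")
door = Prim("door")
handle = Prim("handle")   # a child of a child: impossible in a flat linkset
car.link(door)
door.link(handle)
# [p.name for p in car.walk()] -> ["car body", "door", "handle"]
```

With this structure, moving or rotating the door carries the handle along automatically, and a glTF node tree maps onto the linkset one-to-one.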

Immersion

Game/roleplay mode

An extension of no-fly mode. Turn on for game and roleplay areas.

  • First person camera.
  • No camming, but can turn head some to look around.
  • Limited click range.
  • Can't cam through walls and sit, so rooms with locked doors actually work.

Movement

More animation parameters

Speed, priority, etc.

SL type puppetry.

(More)

Improved unsit

Sit script gets one last chance in the unsit event to put the avatar back to where they were when they sat down, or someplace safe.

There are smart sits where car doors open and the avatar gets inside. Then unsit leaves them standing on top of the car. Ugly. No more standing on tables by accident.

Projectiles

Goal: more reliable bullet rezzing. More parameters on rezzing, so that bullets don't need scripts. See SL BUG-233084.[3] Have a standard bullet mesh that's always available, to avoid asset fetch delays.

Inter-simulator issues

Region crossings

They have to Just Work. Tough technical problem. Worth solving.

Edge object duplication

Stationary solid objects within a few meters of a region edge should have a copy of their physics model duplicated in the adjacent region, to prevent falling through bridges and walking through walls.

Distant scenery

Take orthographic pictures of each region once in a while from all 4 sides and above. Do this with at least noon and midnight lighting, and maybe sunset and sunrise. Simulators display those like a sim surround past the edge of all regions currently visible. See distant mountains. When in the air, you see the "above" picture for areas out of range. This is essentially how Google Earth does their big world.