Dataframe API improvements #7455

Open
jleibs opened this issue Sep 19, 2024 · 2 comments
Labels
enhancement New feature or request 🐍 Python API Python logging API ⛃ re_datastore affects the datastore itself

Comments


jleibs commented Sep 19, 2024

Notes from exploration:

  • EntityPathFilter expression is redundant with column-selection
  • Different types of column filtering are very useful for exploration
  • Select operators feel natural for incrementally filtering a dataset independently of the latest-at / range queries
    • Would be nice to just chain these in, or re-use a selection for multiple queries.
  • Implicit empty columns can be surprising. Some filter operations should probably return errors if the columns are not present.
  • Ultimately need a way to access deterministic/named columns in the context of the resulting Table
    • Allowing a user-set name in the ComponentSelector
  • Would be nice to group together timeline + range like we do in some other places in Rust
  • POV component is confusing
    • Really need a sampled multi-latest-at
    • Could be more clearly expressed as something like:
      let samples = dataset.logged_at(timeline, columns)
      dataframe = dataset.latest_at(samples, columns)
      
      This has the dual benefit of letting users provide their own samples, or choose sample times for multiple columns.
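The `logged_at` / `latest_at` split above can be sketched in plain Python. This is a hypothetical illustration of the semantics only; the method names and the flat `(time, column, value)` store are invented for the sketch and are not part of any real Rerun API:

```python
# Hypothetical sketch of the sampled multi-latest-at idea.
class Dataset:
    def __init__(self, rows):
        # rows: list of (time, column, value) triples
        self.rows = sorted(rows, key=lambda r: r[0])

    def logged_at(self, columns):
        """Sample times at which any of `columns` was logged."""
        return sorted({t for t, col, _ in self.rows if col in columns})

    def latest_at(self, samples, columns):
        """For each sample time, the latest value of each column at or before it."""
        table = []
        for t in samples:
            row = {"time": t}
            for col in columns:
                vals = [v for (ti, c, v) in self.rows if c == col and ti <= t]
                row[col] = vals[-1] if vals else None
            table.append(row)
        return table

ds = Dataset([(1, "pos", (0, 0)), (2, "color", "red"), (4, "pos", (1, 1))])
samples = ds.logged_at(["pos"])                   # -> [1, 4]
table = ds.latest_at(samples, ["pos", "color"])
# table[1] -> {"time": 4, "pos": (1, 1), "color": "red"}
```

Because `samples` is just a list of times, a user could equally pass in their own sample times, which is the second benefit mentioned above.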

Proposals

Start with Python refinement, and then back-propagate into Rust if we like it.

Selections

The python Dataset object will internally track a set of columns that will be used for all queries along with an Arc<ChunkStore>.

Introduce new select_ variant APIs on the Dataset:

  • dataset.select_entities(expr: str) -> Dataset
    • NOTE: only mutates component-columns; no-op on control/time columns
  • dataset.select_components(components: Sequence[ComponentLike]) -> Dataset
    • NOTE: only mutates component-columns; no-op on control/time columns
  • dataset.select_columns(column_selectors: Sequence[ColumnSelector]) -> Dataset

Each of these has the potential to strictly filter/mutate the active set of descriptors relative to the previous step. I.e., the first selection is from the complete set; each incremental selection only selects from the remaining set.
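A minimal sketch of this "strictly narrowing" semantics, assuming columns are `(entity_path, component)` pairs (the descriptor shape and matching rules here are placeholders, not the real types):

```python
# Each select_* returns a new Dataset whose columns are a subset of the
# previous one's, so a later selection can never re-introduce a column
# dropped by an earlier one.
class Dataset:
    def __init__(self, columns):
        # columns: set of (entity_path, component) pairs currently selected
        self.columns = set(columns)

    def select_entities(self, prefix):
        # Simplified entity matching: a real EntityPathFilter expression
        # would be richer than a prefix check.
        return Dataset({c for c in self.columns if c[0].startswith(prefix)})

    def select_components(self, components):
        return Dataset({c for c in self.columns if c[1] in components})

ds = Dataset({
    ("world/cameras/rgb", "Blob"),
    ("world/cameras/rgb", "Transform3D"),
    ("world/points", "Position3D"),
})
narrowed = ds.select_entities("world/cameras").select_components(["Blob"])
# narrowed.columns -> {("world/cameras/rgb", "Blob")}
```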

LatestAtQuery and RangeQuery

Our TimeType ambiguity continues to torment us.

The most ergonomic option is clearly an API that looks like:

  • LatestAtQuery(timeline: str, at: int | float)
  • RangeQuery(timeline: str, min: int | float, max: int | float)

The big challenge here is that sane-looking APIs are ambiguous without knowledge of the timeline.

Concretely:

  • LatestAtQuery(timeline, 2.0) needs to map to the TimeInt 2 if the timeline is a sequence, 2000000000 if the timeline is temporal and the user is thinking in seconds, and 2 if the timeline is temporal and the user is thinking in nanos.

TODO: Still not sure what the right answer is here.
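One possible resolution, sketched here purely for illustration (the function name and the "floats are seconds, ints are nanos" convention are assumptions, not a decided design): resolve the user-facing value against the timeline's type.

```python
NANOS_PER_SECOND = 1_000_000_000

def to_time_int(timeline_type, value):
    """Hypothetical mapping of a user-facing time value to a TimeInt."""
    if timeline_type == "sequence":
        # 2.0 on a sequence timeline maps to the TimeInt 2
        return int(value)
    # On a temporal timeline, floats are interpreted as seconds...
    if isinstance(value, float):
        return int(value * NANOS_PER_SECOND)
    # ...and ints as raw nanoseconds.
    return value

to_time_int("sequence", 2.0)   # -> 2
to_time_int("temporal", 2.0)   # -> 2000000000
to_time_int("temporal", 2)     # -> 2
```

The obvious drawback is that `2` and `2.0` silently mean very different things on a temporal timeline, which is exactly the kind of ambiguity the choices below try to make explicit.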

If we follow precedent from TimeRangeBoundary this ends up looking something like:

Choice A

latest_at = rr.LatestAt("frame", rr.TimeValue(seq=42))
latest_at = rr.LatestAt("time", rr.TimeValue(seconds=42.5))
range = rr.Range("frame", min=rr.TimeValue(seq=10), max=rr.TimeValue(seq=17))

Choice B, with some parameter-exploding, could be simplified down to:

latest_at = rr.LatestAt("frame", seq=42)
latest_at = rr.LatestAt("time", seconds=42.5)
range = rr.Range("frame", min_seq=10, max_seq=17)

# Not sure if we let the users do stupid things
range = rr.Range("time", min_seconds=10.0, max_nanos=17000000000)

Choice C, diverging from what we do in TimeRangeBoundary:

latest_at = rr.dataframe.LatestAt.sequence("frame", 42)
latest_at = rr.dataframe.LatestAt.seconds("time", 42.5)
range = rr.dataframe.Range.sequence("frame", min=10, max=17)
range = rr.dataframe.Range.seconds("time", min=10.0, max=17.0)

Queries

Since the selection is now carried with the Dataset, you can now execute a query directly without providing columns.

  • dataset.latest_at_query(latest_at: LatestAt)
  • dataset.range_query(range: Range, pov: ComponentSelector)

This means you can write a query like:

opf = rr.dataframe.load_recording("datasets/open_photogrammetry_format.rrd")

range = rr.dataframe.Range.sequence("image", min=10, max=17)

# This part is still annoying
pov = rr.dataframe.ColumnSelector("world/cameras/rgb", rr.components.Blob)

df = opf.select_entities("world/cameras/**").range_query(range, pov)

Column Naming

Selectors/Descriptors will be given a name.

This name will default to one of:

  • Control:RowId
  • Timeline:<timeline_name>
  • <entity_path>:<component_short_name>

When specifying a component selector, users have the option to call .rename_as() to change the name of the component.

These names are also valid INPUT to a ColumnSelector.

For example:

image = rr.dataframe.ColumnSelector("/world/camera/rgb:Blob").rename_as("image")

df = opf.select_columns(image).latest_at_query(query)

df = opf.select_columns(image).range_query(query, pov=image)
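The naming behavior above can be sketched with a toy `ColumnSelector` (the attribute names and the `name` property are invented for this sketch; only the default format and `.rename_as()` come from the proposal):

```python
class ColumnSelector:
    """Toy selector illustrating default naming plus rename_as()."""

    def __init__(self, entity_path, component):
        self.entity_path = entity_path
        self.component = component
        self._name = None

    def rename_as(self, name):
        self._name = name
        return self  # chainable, as in the example above

    @property
    def name(self):
        # Defaults to "<entity_path>:<component_short_name>" unless renamed
        return self._name or f"{self.entity_path}:{self.component}"

image = ColumnSelector("/world/camera/rgb", "Blob").rename_as("image")
# image.name -> "image"
# ColumnSelector("/world/camera/rgb", "Blob").name -> "/world/camera/rgb:Blob"
```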

POV

TODO: Needs more thought

@jleibs jleibs added enhancement New feature or request 👀 needs triage This issue needs to be triaged by the Rerun team 🐍 Python API Python logging API ⛃ re_datastore affects the datastore itself and removed 👀 needs triage This issue needs to be triaged by the Rerun team labels Sep 19, 2024

nikolausWest commented Sep 19, 2024

Awesome writeup!

The big challenge here is that sane-looking APIs are ambiguous without knowledge of the timeline.

One other possible direction to go here: what if you actually express the range or latest-at query as an operation on the TimeColumnDescriptor? At that point you know what type it has in the store and can make it ergonomic. It's also symmetric in some way with how we handle other columns. It also avoids mistakes like doing rr.dataframe.Range.seconds("time", min=10.0, max=17) when "time" was in fact a poorly named sequence timeline.

(This is kind of a half-baked idea, so likely mega annoying in some obvious way.)
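The suggestion above can be sketched as follows; all names here are hypothetical, and the point is only that a descriptor which already knows its timeline type can reject mismatched units loudly:

```python
NANOS_PER_SECOND = 1_000_000_000

class TimeColumnDescriptor:
    """Toy descriptor: query constructors hang off the column, which
    already knows whether it is a sequence or a temporal timeline."""

    def __init__(self, name, kind):  # kind: "sequence" or "temporal"
        self.name = name
        self.kind = kind

    def range_sequence(self, min, max):
        if self.kind != "sequence":
            raise TypeError(f"{self.name!r} is a temporal timeline")
        return ("range", self.name, min, max)

    def range_seconds(self, min, max):
        if self.kind != "temporal":
            raise TypeError(f"{self.name!r} is a sequence timeline")
        return ("range", self.name,
                int(min * NANOS_PER_SECOND), int(max * NANOS_PER_SECOND))

frame = TimeColumnDescriptor("frame", "sequence")
frame.range_sequence(10, 17)       # ok -> ("range", "frame", 10, 17)
# frame.range_seconds(10.0, 17.0)  # raises TypeError: units don't match
```

This catches the "poorly named sequence timeline" mistake at query-construction time, though as noted in the reply below it still does not disambiguate seconds vs. nanoseconds on a temporal column.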


jleibs commented Sep 19, 2024

At that point you know what type it has in the store and can make it ergonomic.

I think it half-solves the problem, but it still doesn't actually handle seconds vs nanoseconds on a time-typed column, which is still its own problem. I think to do that we would need to introduce some kind of "natural units" metadata on the column, but that's also awkward and error prone.

I'm hesitant to pull in something like https://pypi.org/project/custom-literals/, but that's of course the kind of behavior that would really be nice.
