|
Merging two different dives by interleaving dive computer data got
broken by the multi-dive-computer code in commit b6c9301e5847 ("Move
more dive computer filled data to the divecomputer structure") which
added a lot more entries to the dive computer data structure, and then
copied the resulting structure incorrectly.
Make sure we don't copy the events and samples allocations when we copy
all the other fields of the divecomputer.
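Roughly, the fix looks like this (a minimal sketch with hypothetical field names, not the actual Subsurface structures): copy the scalar fields wholesale, then reset the pointers that own separate allocations.
    struct sample;                       /* per-sample data, allocated separately   */
    struct event;                        /* event list, also a separate allocation  */

    struct divecomputer {
        int maxdepth_mm, duration_sec;   /* hypothetical scalar fields              */
        int nsamples;
        struct sample *samples;          /* owned allocation                        */
        struct event *events;            /* owned allocation                        */
    };

    /* Copy all the scalar fields, but never the sample/event allocations:
     * the copy must not end up sharing (and later freeing) the same arrays. */
    static void copy_dc_fields(const struct divecomputer *src, struct divecomputer *dst)
    {
        *dst = *src;
        dst->nsamples = 0;
        dst->samples = NULL;
        dst->events = NULL;
    }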
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
We had this special logic to not show the end of a dive when a dive
computer shows a series of very shallow samples (basically snorkeling
back to shore after the dive ended). However, that logic ended up being
global per dive, which is very annoying when you have two or more dive
computers, and it decides to cut off the second one because the first
one surfaces.
So get rid of this per-dive state, and just use the plot-info 'maxtime'
field for this (we never used the 'start' case anyway). That way we
will properly cut off boring surface entries only when they are past the
end of the interesting entries of *all* dive computers, and we won't be
cutting things short.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
This no longer abuses the dive merging code (which would leave stray
"dives" behind if a gps fix couldn't be merged with any of the dives) and
instead parses the gps fixes into a second table and then walks that table
and tries to find matching dives.
The code tries to be reasonably smart about this. If we have
auto-generated GPS fixes at regular intervals, we look for a fix that is
during a dive (that's likely when the boat where the phone is staying dry
is more or less above the diver having fun). And if we have named entries
(so the user typed in a location name) we try to match them in order to
the dives that happened "that day" (where "that day" is about 6h before
and after the timestamp of the gps fix).
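A minimal sketch of the two matching rules, with hypothetical structures and helper names (the real code keys off more dive data):
    #include <stdbool.h>
    #include <stdlib.h>
    #include <time.h>

    struct gpsfix { time_t when; const char *name; int latitude_udeg, longitude_udeg; };
    struct dive   { time_t when; int duration_sec; };

    /* Auto-generated fixes: only accept a fix taken while the dive was running. */
    static bool fix_during_dive(const struct gpsfix *fix, const struct dive *dive)
    {
        return fix->when >= dive->when &&
               fix->when <= dive->when + dive->duration_sec;
    }

    /* Named fixes: accept a dive that happened "that day", i.e. within six hours. */
    static bool fix_same_day(const struct gpsfix *fix, const struct dive *dive)
    {
        const long six_hours = 6 * 3600;
        return labs((long)(fix->when - dive->when)) <= six_hours;
    }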
This commit also renames dive_has_location() to dive_has_gps_location() as
the difference between if(!dive->location) and if(dive_has_location) is a
bit too subtle...
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
Mostly coding style and whitespace changes plus making lots of functions
static that have no need to be extern. This also helped find a bit of code
that is actually no longer used.
This should have absolutely no functional impact - all changes should be
purely cosmetic. But it removes a bunch of lines of code and makes the
rest easier to read.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
The statistics page only used each dive's "watertemp" attribute,
regardless of actual higher/lower temperatures in the samples. By
finding the actual max/min temperatures, the statistics page utilizes
more "real" data and looks better even on single dives.
Signed-off-by: Henrik Brautaset Aronsen <subsurface@henrik.synth.no>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
Update maxdepth / duration that have moved into the divecomputer
structure.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
This moves the fields 'duration', 'surfacetime', 'maxdepth',
'meandepth', 'airtemp', 'watertemp', 'salinity' and 'surface_pressure'
to the per-divecomputer data structure. They are filled in by the dive
computer, and normally not edited.
NOTE! All actual *use* of this data was then changed from dive->field to
dive->dc.field programmatically with a shell-script and sed, and the
result then edited for details. So while the XML save and restore code
has been updated, all the displaying etc will currently always just show
the first dive computer entry.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
Dive locations marked (and named) via the companion app are downloaded
from the webservice, parsed and merged with the existing dives.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
It used to be that when you checked the "Prefer downloaded" checkmark,
we'd throw away *any* old dive computer data. That was good, because it
allowed us to start from a clean slate when you had some old subsurface
data with questionable dive computer data.
However, it was a bit extreme, and it's really not what you want if you
already have (good) dive computer data from other dive computers.
So this modifies the logic a bit. Instead of throwing away all old dive
computer data, the "Prefer downloaded" checkmark now means:
- the newly downloaded data becomes the "primary" dive computer data
(ie the first in the list)
- if there was old dive computer data that *could* have been from this
new dive computer (ie it didn't have model information, or it had a
matching model but no device ID data), we throw that away
- but any existing dive computer data from other dive computers is left.
This seems to be much closer to what we really would want for a new
"preferred" download.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
This patch centralizes the definition for surface pressure, oxygen in
air, (re)defines all such values as plain integers and adapts calculations.
It eliminates 11 (!) occurrences of definitions for surface pressure and
also a few for oxygen in air.
It also rewrites the calculations for EAD, END and EADD using the new
definitions, harmonizing them for OC and CC, and fixes a bug in the OC
EADD calculation.
And finally it removes the unneeded variable entry_ead in gtk-gui.c.
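A sketch of the idea, with the constant names, values and the integer EAD formula shown here as assumptions rather than the exact Subsurface definitions:
    /* One central place for these values (names/representation illustrative): */
    #define SURFACE_PRESSURE 1013          /* mbar, standard atmosphere          */
    #define O2_IN_AIR        209           /* permille of oxygen in air          */

    /* Open-circuit EAD in mm, done in plain integer arithmetic:
     * EAD = (depth + 10m) * fN2 / fN2_air - 10m, with fractions in permille.   */
    static int ead_mm(int depth_mm, int o2_permille)
    {
        return (depth_mm + 10000) * (1000 - o2_permille) / (1000 - O2_IN_AIR) - 10000;
    }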
Jan
Signed-off-by: Jan Schubert <Jan.Schubert@GMX.li>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
.. and rename the badly named 'output_units/input_units' variables.
We used to have this confusing thing where we had two different units
(input vs output) that *look* like they are mirror images, but in fact
"output_units" was the user units, and "input_units" are the XML parsing
units.
So this renames them to be clearer. "output_units" is now just "units"
(it's the units a user would ever see), and "input_units" is now
"xml_parsing_units" and set by the XML file parsers to reflect the units
of the parsed file.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
This clarifies/changes the meaning of our "cylinderindex" entry in our
samples. It has been rather confused, because different dive computers
have done things differently, and the naming really hasn't helped.
There are two totally different - and independent - cylinder "indexes":
- the pressure sensor index, which indicates which cylinder the sensor
data is from.
- the "active cylinder" index, which indicates which cylinder we actually
breathe from.
These two values really are totally independent, and have nothing
what-so-ever to do with each other. The sensor index may well be fixed:
many dive computers only support a single pressure sensor (whether
wireless or wired), and the sensor index is thus always zero.
Other dive computers may support multiple pressure sensors, and the gas
switch event may - or may not - indicate that the sensor changed too. A
dive computer might give the sensor data for *all* cylinders it can read,
regardless of which one is the one we're actively breathing. In fact, some
dive computers might give sensor data for not just *your* cylinder, but
your buddy's.
This patch renames "cylinderindex" in the samples as "sensor", making it
quite clear that it's about which sensor index the pressure data in the
sample is about.
The way we figure out which is the currently active gas is with an
explicit gas change event. If a computer (like the Uemis Zurich) joins the
two concepts together, then a sensor change should also create a gas
switch event. This patch also changes the Uemis importer to do that.
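A simplified sketch of the distinction, with hypothetical structure layouts (only the 'sensor' field and the gas change walk matter here):
    #include <string.h>

    struct sample {
        int time_sec;
        int depth_mm;
        int pressure_mbar;
        int sensor;              /* which pressure sensor this reading came from */
    };

    struct event {
        int time_sec;
        const char *name;        /* e.g. "gaschange"                              */
        int value;               /* which cylinder we switched to                 */
    };

    /* The active cylinder comes from walking the gas change events,
     * never from sample->sensor. */
    static int active_cylinder_at(const struct event *events, int nevents, int time_sec)
    {
        int cyl = 0;             /* start out on the first cylinder               */
        for (int i = 0; i < nevents; i++) {
            if (events[i].time_sec > time_sec)
                break;
            if (events[i].name && !strcmp(events[i].name, "gaschange"))
                cyl = events[i].value;
        }
        return cyl;
    }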
Finally, it should be noted that the plot info works totally separately
from the sample data, and is about what we actually *display*, not about
the sample pressures etc. In the plot info, the "cylinderindex" does in
fact mean the currently active cylinder, and while it is initially set to
match the sensor information from the samples, we then walk the gas change
events and fix it up - and if the active cylinder differs from the sensor
cylinder, we clear the sensor data.
[Dirk Hohndel: this conflicted with some of my recent changes - I think
I merged things correctly...]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
We used to have the rule that a dive trip has to have all dives in it in
sequential order, even though our XML file really is much more flexible,
and allows arbitrary nesting of dives within a dive trip.
Put another way, the old model had fairly inflexible rules:
- the dive array is sorted by time
- a dive trip is always a contiguous slice of this sorted array
which makes perfect sense when you think of the dive and trip list as a
physical activity by one person, but leads to various very subtle issues
in the general case when there are no guarantees that the user then uses
subsurface that way.
In particular, if you load the XML files of two divers that have
overlapping dive trips, the end result is incredibly messy, and does not
conform to the above model at all.
There's two ways to enforce such conformance:
- disallow that kind of behavior entirely.
This is actually hard. Our XML files aren't date-based, they are
based on XML nesting rules, and even a single XML file can have
nesting that violates the date ordering. With multiple XML files,
it's trivial to do in practice, and while we could just fail at
loading, the failure would have to be a hard failure that leaves the
user no way to use the data at all.
- try to "fix it up" by sorting, splitting, and combining dive trips
automatically.
Dirk had a patch to do this, but it really does destroy the actual
dive data: if you load both mine and Dirk's dive trips, you ended up
with a result that followed the above two technical rules, but that
didn't actually make any *sense*.
So this patch doesn't try to enforce the rules, and instead just changes
them to be more generic:
- the dive array is still sorted by dive time
- a dive trip is just an arbitrary collection of dives.
The relaxed rules means that mixing dives and dive trips for two people
is trivial, and we can easily handle any XML file. The dive trip is
defined by the XML nesting level, and is totally independent of any
date-based sorting.
It does require a few things:
- when we save our dive data, we have to do it hierarchically by dive
trip, not just by walking the dive array linearly.
- similarly, when we create the dive tree model, we can't just blindly
walk the array of dives one by one, we have to look up the correct
trip (parent)
- when we try to merge two dives that are adjacent (by date sorting),
we can't do it if they are in different trips.
but apart from that, nothing else really changes.
NOTE! Despite the new relaxed model, creating totally disjoint dive
trips is not all that easy (nor is there any *reason* for it to be
easy). Our GUI interfaces still are "add dive to trip above" etc, and
the automatic adding of dives to dive trips is obviously still based on
date.
So this does not really change the expected normal usage, the relaxed
data structure rules just mean that we don't need to worry about the odd
cases as much, because we can just let them be.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
This commit makes deco handling in Subsurface more compatible with the way
libdivecomputer creates the data. Previously we assumed that having a
stopdepth or stoptime and no ndl meant that we were in deco. But
libdivecomputer supports many dive computers that provide the deco state
of the diver but with no information about the next stop or the time
needed there. In order to be able to model this in Subsurface this adds an
in_deco flag to the samples. This is only stored to the XML file when it
changes so it doesn't add much overhead but will allow us to display some
deco information on dive computers like the Atomic Aquatics Cobalt or many
of the Suuntos (among others).
The commit also removes the old event based deco code that was commented
out already. And fixes the code so that the deco / ndl information is
stored for the very last sample as well.
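A minimal sketch of the save logic, with the attribute name and XML layout shown here purely as illustration:
    #include <stdbool.h>
    #include <stdio.h>

    struct sample { int time_sec; int depth_mm; bool in_deco; };

    /* Only write the attribute when the flag changes, so it adds almost
     * nothing to the saved file. */
    static void save_samples(FILE *f, const struct sample *s, int n)
    {
        bool last_in_deco = false;
        for (int i = 0; i < n; i++) {
            fprintf(f, "  <sample time='%d' depth='%d'", s[i].time_sec, s[i].depth_mm);
            if (s[i].in_deco != last_in_deco) {
                fprintf(f, " in_deco='%d'", s[i].in_deco ? 1 : 0);
                last_in_deco = s[i].in_deco;
            }
            fprintf(f, " />\n");
        }
    }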
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
dive.c:merge_text():
When "or" is translated into other languages it may be longer than 2 letters,
therefore there is a need for a slightly larger buffer to be reserved.
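A sketch of the sizing, assuming gettext-style translation and an illustrative output format:
    #include <libintl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #define _(s) gettext(s)

    /* Combine two text fields, sizing the buffer from the translated
     * separator instead of assuming "or" is only two characters long. */
    static char *merge_text(const char *a, const char *b)
    {
        const char *or_text = _("or");
        size_t len = strlen(a) + strlen(b) + strlen(or_text) + sizeof(" () ");
        char *res = malloc(len);
        if (res)
            snprintf(res, len, "%s (%s) %s", a, or_text, b);
        return res;
    }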
Signed-off-by: Lubomir I. Ivanov <neolit123@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
dive.c:
merge_cylinder_type() can cause a small memory leak if two cylinder types
are about to be merged, but the redundant one has a "description" string
allocated.
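A minimal sketch of the fix, with hypothetical field names:
    #include <stdlib.h>

    typedef struct {
        int size_mliter;
        int workingpressure_mbar;
        const char *description;      /* strdup()ed, so it must be freed */
    } cylinder_type_t;

    /* When the second type is redundant, release its description before
     * dropping it instead of leaking the string. */
    static void merge_cylinder_type(cylinder_type_t *res, const cylinder_type_t *a,
                                    cylinder_type_t *b)
    {
        if (a->size_mliter) {
            free((void *)b->description);
            b->description = NULL;
            *res = *a;
        } else {
            *res = *b;
        }
    }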
Signed-off-by: Lubomir I. Ivanov <neolit123@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
dive.c:
A dive computer may have its model allocated in memory.
Let's clear that as well when calling free_dc().
Signed-off-by: Lubomir I. Ivanov <neolit123@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
This patch makes a couple of modifications:
1) divelist.c:delete_single_dive() now tries to free all memory associated
with a dive, such as the string values for divemaster, location, notes,
etc.
2) dive.c:merge_text() now always makes a copy in memory of the returned
string - either the combined text or one of the two strings passed to
the function.
The reason for the above two changes is that when (say) importing the same
data over and over, technically a merge will occur for the contained dives,
but mapped pointers can go out of scope.
main.c:report_dives() calls try_to_merge() and, if that succeeds, the two
dives that were merged are deleted from the table. When we delete a dive,
we now make sure all its string data is freed with it; and in the actual
merge that precedes the deletion, copies of the merged texts are made
(with merge_text()), so the new, resulting dive has its own text
allocations.
Signed-off-by: Lubomir I. Ivanov <neolit123@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
We had logic to remove duplicate dive computer information after merging
dives, but it didn't actually work.
Why? Because we had used the 'res' dive computer pointer to traverse the
list of dive computers, so it no longer actually pointed to the first
dive computer in the result list, and so the "remove redundant"
code only removed redundant dive computers from a limited and incomplete
list.
Oops.
Also, before checking the whole event and sample list, check if it's the
exact same dive computer using our new "match_one_dc()" helper function,
and don't even bother checking for sample details if it is.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
If we have divecomputer model and dive ID information available, use
that to match existing dives when trying to merge them.
Otherwise fall back to the fuzzy time-based merging logic.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
I keep forgetting to do that.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
I foolishly changed visible_columns in both the (ill-named) cns branch and
master...
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
Conflicts:
divelist.c
gtk-gui.c
profile.c
|
|
We either pick the CNS reported by the dive computer at the end of the
dive, or the maximum of that and the CNS values in the samples, if any.
As usual, this column in the dive list defaults to off and it is
controlled by a setting in the tec page of the preferences.
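A sketch of the selection logic, with hypothetical field names:
    struct sample { int time_sec; int cns; };
    struct divecomputer { int cns; const struct sample *samples; int nsamples; };

    /* Start from the dive computer's end-of-dive CNS and bump it up by any
     * higher value found in the samples. */
    static int dive_cns(const struct divecomputer *dc)
    {
        int cns = dc->cns;
        for (int i = 0; i < dc->nsamples; i++)
            if (dc->samples[i].cns > cns)
                cns = dc->samples[i].cns;
        return cns;
    }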
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
Now we can simply remember the state of all the preferences at the
beginning of preferences_dialog() and restore them if the user presses
'Cancel'.
Fixes #21
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
This adds the new members to the sample structure and fills them from
supported dive computers (Uemis SDA and OSTC / Shearwater Predator,
assuming you have libdivecomputer 0.3).
Save the relevant values to the XML file and load them back. Handle the
new fields when merging dives.
At this stage we don't DO anything with this, all we do is extract them
from the dive computer, save them to the XML file and load them back.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
Tune the dive joining surface event insert code.
So this makes us do surface events only if the samples are more than one
minute apart, and are shallow enough (randomly selected at 5m).
We can add more heuristics. Maybe we should compare the 1-minute sample
time limit of the previous sample to the time to the sample before that:
if some computer (or manually entered dive) has a long time between
*all* samples, we'd make the cut-off time longer.
Baby steps.
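A minimal sketch of the check, with hypothetical structure names:
    #include <stdbool.h>

    struct sample { int time_sec; int depth_mm; };

    /* Insert an artificial surface event between two joined samples only when
     * the gap is longer than a minute and both readings are shallow (< 5 m). */
    static bool needs_surface_event(const struct sample *prev, const struct sample *next)
    {
        const int min_gap_sec = 60;
        const int shallow_mm = 5000;
        return next->time_sec - prev->time_sec > min_gap_sec &&
               prev->depth_mm < shallow_mm &&
               next->depth_mm < shallow_mm;
    }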
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
This doesn't really change the logic of the merging, but it does mean
that the end result tends to be less unexpected: when downloading dives
that end up being merged with pre-existing dives (because you have
multiple dive computers, for example), the newly downloaded dive data
will tend to be appended to the old dive data, rather than showing up
first.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
Handle the case where we have no divecomputer information.
Reported-by: Sergey Starosek <sergey.starosek@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
This actually makes us internally use 'micro-degrees' for latitude and
longitude, and we never turn them into floating point either at parse
time or save time.
That said, the Uemis downloader internally does still use atof() when
converting things, which is likely a bug (locale issues and all that),
but I'll ask Dirk to check it out.
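A sketch of a locale-independent integer parse into micro-degrees (helper name and exact parsing rules are assumptions):
    #include <ctype.h>

    /* Parse "12.345678" (optionally signed) into micro-degrees without ever
     * going through floating point, so the result is exact and locale-proof. */
    static int parse_udeg(const char *s)
    {
        int sign = 1, whole = 0, frac = 0, digits = 0;
        if (*s == '-') {
            sign = -1;
            s++;
        } else if (*s == '+') {
            s++;
        }
        while (isdigit((unsigned char)*s))
            whole = whole * 10 + (*s++ - '0');
        if (*s == '.') {
            s++;
            while (isdigit((unsigned char)*s) && digits < 6) {
                frac = frac * 10 + (*s++ - '0');
                digits++;
            }
        }
        while (digits++ < 6)
            frac *= 10;
        return sign * (whole * 1000000 + frac);
    }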
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
This tunes the heuristics for when to merge two dives together into a
single dive. We used to just look at the date, and say "if they're
within one minute of each other, try to merge". This looks at the
actual dive data, and tries to see just how much sense it makes to
merge the dive.
It also checks if the dives to be merged use different dive computers,
and if so relaxes the one minute to five, since most people aren't
quite as OCD as I am, and don't tend to set their dive computers quite
that exactly to the same time and date.
I'm sure people can come up with other heuristics, but this should
make that easier too.
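A sketch of the relaxed time window, with hypothetical fields (the real heuristics also compare the actual dive data):
    #include <stdbool.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    struct dive { time_t when; const char *dc_model; };

    /* Allow a fuzzier time match when the two dives clearly come from different
     * dive computers, whose clocks are rarely set to the same second. */
    static bool close_enough_to_merge(const struct dive *a, const struct dive *b)
    {
        bool same_dc = a->dc_model && b->dc_model && !strcmp(a->dc_model, b->dc_model);
        long window = same_dc ? 60 : 5 * 60;
        return labs((long)(a->when - b->when)) <= window;
    }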
NOTE! If you have things like wrong timezones etc, and the
divecomputer dates are thus off by hours rather than by a couple of
minutes, this will still not merge them. For that kind of situation,
we'd need some kind of manual merge option. Note that that is *not*
the same as the current "merge two adjacent dives" together, which
joins two separate dives into one *longer* dive with a surface
interval in between.
That kind of manual merge UI makes sense, but is independent of this
partical change.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
When picking the "better" trip, we stupidly looked not at the trip
location, but at the _dive_ location.
Which obviously didn't actually pick the "better" trip information at
all, since it never actually looked at the trip itself.
Oops.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
Commit bb6b6b49a6d4 "Start merging dives by keeping the dive computer data
from both dives" created a compile time warning. This simply adds an #if /
Yes, this might accelerate bit rot in the code, but I just dislike the
warning message when compiling Subsurface.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
Now that we have dive computer device ID fields etc, we can do a much
better job of merging the dive computer data.
The rule is
- if we actually merge two disjoint dives (ie extended surface interval
causing the dive computer to think the dive ended and turning two of
those dives into one), find the *matching* dive computer from the
other dive to combine with.
- if we are merging dives at the same time, discard old-style data with
no dive computer info (ie act like a re-download)
- if we have new-style dive computers with identifiers, take them all.
which seems to work fairly well.
There's more tweaking to be done, but I think this is getting to the
point where it largely works.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
Also, note that we do *not* do the "find_sample_offset()" any more when
we merge two dives that happen at the same time - since we just keep
both sets of dive computer data around.
But we keep the function to find the best offset around, because we may
well want to use it later when *showing* the dive, and trying to match
up the different sample data from the multiple dive computers associated
with the dive.
Because of that, this causes warnings about the now unused function.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
When we have a preferred dive computer that overrides old information
when merging two dives, we just copy the dive computer data over.
However, we need to clear the source of the dive computer data so that
we then don't free the sample data when that old source of the newly
merged dive gets free'd.
This fixes a memory scribble (and likely SIGSEGV) for the "prefer
downloaded" case.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
For now we only have one fixed divecomputer associated with each dive,
so this doesn't really change any current semantics. But it will make
it easier for us to associate a dive with multiple dive computers.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
We used to avoid some extra allocations by just allocating the dive
samples as part of the 'struct dive' allocation itself, but that ends up
complicating things, and will make it impossible to have multiple
different sets of samples (for multiple dive computers).
So stop doing it. Just allocate the dive samples array separately.
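A sketch of the separately grown sample array (helper name and growth policy are assumptions):
    #include <stdlib.h>

    struct sample { int time_sec; int depth_mm; };

    struct divecomputer {
        int nsamples, alloc_samples;
        struct sample *samples;       /* now its own allocation, grown on demand */
    };

    /* Return a pointer to the next free sample slot, growing the array as needed. */
    static struct sample *next_sample_slot(struct divecomputer *dc)
    {
        if (dc->nsamples >= dc->alloc_samples) {
            int alloc = dc->alloc_samples ? 2 * dc->alloc_samples : 16;
            struct sample *s = realloc(dc->samples, alloc * sizeof(*s));
            if (!s)
                return NULL;
            dc->samples = s;
            dc->alloc_samples = alloc;
        }
        return &dc->samples[dc->nsamples++];
    }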
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
When this was first implemented the assumption was that a downloaded dive
that is to be merged with an existing dive would have the same time stamp.
But as Linus pointed out even back then, this does fail if a dive has been
merged with a download from a different dive computer before (think:
download from computer a, then download same dive from b, then improve
something in the parsing from computer a and try to redownload; the time
stamp could have changed).
This commit also fixes a silly omission in the merge_dives() function
(which ended up ALWAYS preferring the downloaded dive) and finally
implements the necessary changes to mark dives downloaded from a Uemis SDA
as well.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
Most of the dive computers I have access to don't do the whole surface
event thing at the beginning or the end of the dive, so when you merge
two consecutive dives, you got this odd merged dive where the diver
spent the time in between at a depth of 1.2m or so (whatever the dive
computer "I'm now under water" depth limit happens to be).
Don't do that. Add surface events at the end of the first dive to be
merged, and the beginning of the second one, so that the time in between
dives is properly marked as being at the surface.
The logic for "time in between dives" is a bit iffy - it's "more than 60
seconds with no samples". If somebody has dive computers with samples
more than 60 seconds apart, this will break and we may have to revisit
the logic. But dang, that's some seriously broken sample rate.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
This will hopefully not be something we need often, but if we improve
support for a divecomputer (either in libdivecomputer or in our native
Uemis code or even in the way we handle (and potentially discard) events),
then it is extremely useful to be able to say "re-download things
from the divecomputer and for things that were not edited in Subsurface,
don't try to merge the data (which gives BAD results if for example you
fixed a bug in the depth calculation in libdivecomputer) but instead
simply take the samples, the events and some of the other unedited data
straight from the download".
This commit implements just that - a "force download" checkbox in the
download dialog that makes us reimport all dives from the dive computer,
even the ones we already have, and an "always prefer downloaded dive"
checkbox that then tells Subsurface not to merge but simply to take the
data from the downloaded dive - without overwriting the things we have
already edited in Subsurface (like location, buddy, equipment, etc).
This, as a precaution, refuses to merge dives that don't have identical
start times. So if you have edited the date / time of a dive or if you
have previously merged your dive with a different dive computer (and
therefore modified samples and events) you are out of luck.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
This just makes sure that the merged dive is properly selected, and
that we've saved the trip tree state so that the dive list repaints
nicely and with the newly merged dive selected after the merge.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
This introduces the notion of merging two disjoint dives: you can select
two dives from the dive list, and if the selection is exactly two dives,
and they are adjacent (and share the same dive trip), we support the
notion of merging the dives into one dive.
The most common reason for this is an extended surface event, which made
the dive computer decide that the dive was ended, but maybe you were
just waiting for a buddy or a student at the surface, and you want to
stitch together two dives into one.
There are still details to be sorted out: my Suunto dive computers don't
actually do surface samples at the beginning or end of the dive, so when
you stitch two dives together, the profile ends up being this odd "a
couple of feet under water between the two parts of the dive" thing.
But that's an independent thing from the actual merging logic, and I'll
work on that separately.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
This just re-organizes the dive merging code so that we expose a new
"merge_dives(a, b, offset)" function that merges two dives together into
one with the samples (and events) of 'b' at the specified offset after
'a'.
We'll want to use this if a dive computer has decided that the dive
ended (due to a pause at the surface), but we really want to just turn
the two computer dives into one long one with an extended surface swim.
No functional changes, but some independent cleanups due to the trip
simplifications.
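A sketch of how a caller would use the new entry point; the return type and dive layout here are assumptions:
    #include <time.h>

    struct dive { time_t when; /* plus samples, events, ... */ };

    /* Exposed by the merge code: combine 'b' into 'a', placing b's samples
     * and events 'offset' seconds after the start of 'a'. */
    extern struct dive *merge_dives(struct dive *a, struct dive *b, int offset);

    /* For two consecutive computer dives split by a surface pause, the offset
     * is simply the difference between the two start times. */
    static struct dive *join_consecutive_dives(struct dive *a, struct dive *b)
    {
        return merge_dives(a, b, (int)(b->when - a->when));
    }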
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
We don't change any of the samples, we just don't plot (or consider for
dive time / mean calculations) the samples at the beginning or end of the
dive that are less than a certain threshold under water. Right now that's
an arbitrary 75cm which seems to Do The Right Thing(tm) for the dives I
tried this with - but I'm happy to look at other values if this causes
problems for people with dive computers I do not have access to.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
Add the bookmark and heading events to the list of events not to be
simplified just because they are redundant - in both cases they are
about the user doing something explicit (like the gaschange), so even
if the data is otherwise identical, they should likely be saved.
That said, both events are kind of pointless (we don't actually seem
to save the heading value for the heading events, and bookmarks are
universally just due to user error in at least my case). But still..
This overly aggressive filtering was introduced in commit 6ad73a8f043b
("Improve logic handling events").
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
This adds a couple of helper functions to manage dive trips
("add_dive_to_trip()" and "remove_dive_from_trip()") and makes those
functions do the trip statistics maintenance (trip beginning times,
number of dives, etc).
This was needed because the dive merge cases for multiple dive
computers showed some rather nasty special cases: especially if the
new dive information has been loaded into an XML file with trips
auto-generated, merging several of these kinds of xml files with
multiple dives in several overlapping trips would completely confuse
our previous code.
In particular, auto-generated trips that had the exact same date as
previous trips (because they were generated from the same dive
computer) really confused the code that used the trip timestamp to
manage the trips.
Adding the helper functions allows us to get the general case right
without having to have each piece of code that handles trip
information having to bother about all the odd rules. It will
eventually also allow us to make the dive trip data structures more
logical: right now the dive trip list is largely designed around the
odd gtk model handling, rather than some more higher-level conceptual
relationship with the actual dives.
But for now, this keeps all the data structures unchanged, and just
modifies them using the new helper functions.
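A simplified sketch of the two helpers (the real statistics maintenance covers more fields):
    #include <time.h>

    struct dive_trip { time_t when; int nrdives; /* ... */ };
    struct dive { time_t when; struct dive_trip *divetrip; };

    /* All trip bookkeeping goes through these helpers so the per-trip
     * statistics (dive count, trip start time) cannot drift out of sync. */
    static void remove_dive_from_trip(struct dive *dive)
    {
        struct dive_trip *trip = dive->divetrip;
        if (!trip)
            return;
        dive->divetrip = NULL;
        trip->nrdives--;
    }

    static void add_dive_to_trip(struct dive *dive, struct dive_trip *trip)
    {
        if (dive->divetrip == trip)
            return;
        remove_dive_from_trip(dive);
        dive->divetrip = trip;
        trip->nrdives++;
        if (!trip->when || dive->when < trip->when)
            trip->when = dive->when;
    }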
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
This requires a patched libdivecomputer that can return salinity of the
water the dive was conducted in. Experimental patches exist that implement
this for the OSTC. The code is designed so that it simply defaults to salt
water if libdivecomputer doesn't include the feature.
The patch also fixes the dive merge code to merge two other recent
additions to the dive structure (surface_pressure and visibility).
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
When we merge dives where the samples have come from different dive
computers, the samples may be offset from each other due to the dive
computers not having decided that the dive starts at quite the same
time.
For example, some dive computers may take a while to wake up when
submerged, or there may be differences in exactly when the dive
computer decides that a dive has started. Different computers tend to
have different depths that they consider the start of a real dive.
So when we merge two dives, look for differences in the sample data,
and search for the sample time offset that minimizes the differences
(logic: minimize the sum-of-square of the depth differences over a
two-minute window at the start of the dive).
This still doesn't really result in perfect merges, since different
computers will give slightly different values anyway, but it improves
the dive merging noticeably. To the point that this seems to have
found a bug in our Uemis data import (it looks like the Uemis importer
does an incorrect saltwater pressure conversion, and the data is
actually in centimeters, not in pressure).
So there is room for improvement, but this is at least a reasonable
approximation and starting point.
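A sketch of the offset search, with hypothetical structures and an arbitrary ±60 second search range and 5 second step:
    struct sample { int time_sec; int depth_mm; };
    struct divecomputer { const struct sample *samples; int nsamples; };

    /* Depth (mm) of the last sample at or before time t; surface outside range. */
    static int depth_at(const struct divecomputer *dc, int t)
    {
        int depth = 0;
        for (int i = 0; i < dc->nsamples && dc->samples[i].time_sec <= t; i++)
            depth = dc->samples[i].depth_mm;
        return depth;
    }

    /* Try candidate offsets and keep the one that minimizes the sum of squared
     * depth differences over the first two minutes of the dive. */
    static int best_sample_offset(const struct divecomputer *a, const struct divecomputer *b)
    {
        const int window_sec = 120, max_shift_sec = 60, step_sec = 5;
        long long best = -1;
        int best_offset = 0;
        for (int off = -max_shift_sec; off <= max_shift_sec; off++) {
            long long sum = 0;
            for (int t = 0; t < window_sec; t += step_sec) {
                long long diff = depth_at(a, t) - depth_at(b, t + off);
                sum += diff * diff;
            }
            if (best < 0 || sum < best) {
                best = sum;
                best_offset = off;
            }
        }
        return best_offset;
    }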
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
This enables plotting the ceiling in deco dives and also adds the
necessary code to the Uemis importer. The only other dive computer this
has been tested with is the OSTC, which needs a libdivecomputer patch in
order to provide the deco/ceiling information to Subsurface.
Fixes #5
|
|
We now throw away redundant events, just as we throw away other redundant
data coming from the dive computer. Events are considered redundant if
they are less than 61 seconds apart and identical.
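A minimal sketch of the redundancy test, with hypothetical event fields:
    #include <stdbool.h>
    #include <string.h>

    struct event { int time_sec; int type, flags, value; const char *name; };

    /* An incoming event is redundant if an identical one was already recorded
     * less than 61 seconds earlier. */
    static bool event_is_redundant(const struct event *prev, const struct event *ev)
    {
        if (!prev || ev->time_sec - prev->time_sec >= 61)
            return false;
        return prev->type == ev->type &&
               prev->flags == ev->flags &&
               prev->value == ev->value &&
               !strcmp(prev->name ? prev->name : "", ev->name ? ev->name : "");
    }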
This also improves the display of the remaining events in the profile as
we now show the value of the event, if it is present (for example for a
deco event we show the duration of the deepest stop).
Finally, for events that define a range (so they set the beginning flag
and assume an end flag some time later) we no longer show the triangle but
assume that some other code handles visualizing them (as happens for the
ceiling events).
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|