|
When run with the -v option, this prints local file names, such as the
path to the local git repository and the hash file.
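Roughly the kind of output meant here, as a sketch in C (the flag and path
variables are made up for illustration, not the actual Subsurface code):

    #include <stdio.h>

    /* Hypothetical names; the real variables live in the git storage code. */
    static void report_local_paths(int verbose, const char *localdir,
                                   const char *hashfile)
    {
            if (verbose) {
                    fprintf(stderr, "local git repository: %s\n", localdir);
                    fprintf(stderr, "hash file: %s\n", hashfile);
            }
    }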
Signed-off-by: Robert C. Helling <helling@atdotde.de>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
Signed-off-by: Lubomir I. Ivanov <neolit123@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
These were left unused after the cleanup of the import function parameters.
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
Continuing the crusade against the excessive number of parameters in some
functions. This should be the last of the import functions to be cleaned
up.
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
This removes the excessive number of parameters from the manual CSV import.
We just use an appropriate string array that can be fed directly to the
XSLT parsing.
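For illustration, libxslt takes its parameters as a NULL-terminated array
of name/value string pairs, so the collected fields can be handed over
directly; a minimal sketch (the parameter names are made up):

    #include <libxml/tree.h>
    #include <libxslt/xsltInternals.h>
    #include <libxslt/transform.h>

    /* Apply a stylesheet with a NULL-terminated name/value parameter array;
     * string values are quoted so libxslt treats them as XPath strings. */
    static xmlDoc *transform_with_params(xsltStylesheet *style, xmlDoc *input)
    {
            const char *params[] = {
                    "timeField", "'1'",     /* illustrative parameter names */
                    "depthField", "'2'",
                    NULL
            };

            return xsltApplyStylesheet(style, input, params);
    }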
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
There was a bug in the MKVI download tool that resulted in erroneous sample
times. This fix takes care of that and should work similarly to the
vendor's own tool.
Fixes #916
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
If we are trying to open an empty file, skip the attempts to parse the
input and report an appropriate error message.
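A sketch of the idea, assuming the file has already been read into a
memory buffer (the struct here is illustrative):

    #include <stdio.h>

    struct memblock {               /* illustrative buffer type */
            void *buffer;
            size_t size;
    };

    static int check_empty(const struct memblock *mem, const char *filename)
    {
            if (mem->size == 0) {
                    fprintf(stderr, "Empty file '%s'\n", filename);
                    return -1;      /* skip all parsing attempts */
            }
            return 0;
    }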
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
If we are running Subsurface in verbose mode, print the xsltproc command
line to test the XSLT manually.
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
When Subsurface is run with a high enough verbosity level, generate a
command line to test the Seabear import manually with xsltproc.
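Roughly the kind of output meant, sketched in C with made-up names for the
stylesheet, input file and parameter:

    #include <stdio.h>

    /* Print a command line that reproduces the import by hand. */
    static void print_xsltproc_cmdline(const char *xslt_file,
                                       const char *csv_file, int interval)
    {
            fprintf(stderr, "xsltproc --stringparam delta \"%d\" %s %s\n",
                    interval, xslt_file, csv_file);
    }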
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
It seems that adding parameters to the Seabear import resulted in some
parameters being missed, which led to a parsing error / recursion. This
patch tackles the problem by introducing a dynamic parameter counter.
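The general shape of such a counter, sketched with made-up parameter names:

    #include <stddef.h>

    #define MAX_PARAMS 16

    /* Fill the XSLT parameter list with a running index instead of fixed
     * positions, so adding a parameter cannot leave a hole in the array. */
    static void build_params(const char *params[MAX_PARAMS],
                             const char *units, const char *delta)
    {
            int pnr = 0;

            params[pnr++] = "units";        /* illustrative names */
            params[pnr++] = units;
            params[pnr++] = "delta";
            params[pnr++] = delta;
            params[pnr] = NULL;             /* libxslt expects NULL termination */
    }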
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
We do not really need the buffers when doing CSV import. Instead we use
dynamic memory allocation for the values. Note that the Seabear part is
not tested, as that import no longer works for me.
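Roughly the pattern being described, using asprintf() as one way to
allocate the value strings (sketch only):

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdlib.h>

    /* Allocate each value string on demand instead of using a fixed-size
     * buffer; the caller frees the result once the parameters were used. */
    static char *make_field_value(int column)
    {
            char *value;

            if (asprintf(&value, "'%d'", column) < 0)
                    return NULL;
            return value;
    }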
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
We need to start the second search from the start of the buffer if the
\r\n search returns nothing.
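In outline, assuming the whole file sits in one buffer (hypothetical
helper, not the actual code):

    #include <string.h>

    /* Find the first empty line: look for "\r\n\r\n" and, if that fails,
     * restart from the beginning of the buffer looking for "\n\n". */
    static char *find_empty_line(char *buf)
    {
            char *p = strstr(buf, "\r\n\r\n");

            if (!p)
                    p = strstr(buf, "\n\n");
            return p;       /* NULL means there is no empty line at all */
    }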
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
The import of setpoint values is tested with Seabear data.
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
This parses the basic metadata of a dive when importing a Divinglog
database. (Discarding deleted dives is my best guess, as I do not have any
deleted dives in my sample data.)
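For illustration only, a query of the rough shape used for this kind of
import via the sqlite3 C API; the table and column names are assumptions,
not necessarily the actual Divinglog schema:

    #include <sqlite3.h>
    #include <stdio.h>

    static int dive_cb(void *param, int columns, char **data, char **column)
    {
            /* data[i] holds one metadata field of a single dive */
            (void)param; (void)columns; (void)data; (void)column;
            return 0;
    }

    static int load_divinglog_dives(sqlite3 *handle)
    {
            char *err = NULL;
            /* "Logbook" and "Deleted" are guesses used for the example */
            const char *query = "SELECT Number, Divedate, Entrytime, Depth "
                                "FROM Logbook WHERE Deleted IS NULL OR Deleted = 0";

            if (sqlite3_exec(handle, query, dive_cb, NULL, &err) != SQLITE_OK) {
                    fprintf(stderr, "Database query failed: %s\n", err);
                    sqlite3_free(err);
                    return -1;
            }
            return 0;
    }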
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
Remove several very noisy messages from dive site handling (this seems to
work well now, so I think we can remove most of them; a couple were left
that indicate actual issues).
Also remove all the calls to "translate" when outputting data to stderr.
Error messages that indicate issues where the user will basically have to
come and ask the developers for help shouldn't be localized. They should
be in English to make it easier for us to figure out what's going on.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
Instead of replacing all the empty model tags with some text after the
csv-to-xml transform, just produce that text in the transform itself.
Signed-off-by: Anton Lundin <glance@acc.umu.se>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
This adds support for importing individual O2 sensors from a CSV file,
e.g. an APD log viewer file.
Signed-off-by: Anton Lundin <glance@acc.umu.se>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
In each case there are scenarios where we would have dereferenced NULL.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
Free memory returned from parse_mkvi_value()
Free memory returned from printGPSCoords()
Free memory allocated in added_list and removed_list
Free memory allocated when adding suffix to dive site name
Free memory allocated in cache_deco_state()
Free memory allocated in build_filename()
Free memory allocated in get_utf8()
Free memory allocated in alloc_dive()
Free memory allocated as cache but never used
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
This was a poorly implemented hack from the time when we executed the
reverse geo lookup in the main thread and opening a V2 file could take a
very long time. We need to do the "Welcome" message quite differently.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
This creates the basis to allow other backends to be used with the cloud
storage infrastructure.
So far this should all just transparently continue to work. A user would
have to manually add the cloud_base_url entry to the CloudStorage section
in their config file in order to use a different backend server.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
If we fail to load data from cloud storage (for example the first time we
try to use it, when the remote repository is empty), don't show git
related errors to the user. It's enough to tell them that the cloud
storage is empty.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
The lower level functions will already report that things didn't connect
successfully, no reason to repeat it here (which then exposes the git
URL).
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
Also change the name of the enum and make sure all the inner functions get
passed the remote transport information.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
This change once again tests if the remote can be reached. Even with a
fairly big data file and a medium speed internet connection the remote
sync is fast enough to call it nearly instantaneous. Maybe a couple of
seconds.
We may need more checks / different heuristics / warnings if the sync
didn't happen, etc. But for now this should allow more reasonable testing.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
Fixes #853
[Dirk Hohndel: fixed test compile]
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
This snuck in on commit f4f791ffbdd2 ("Add verbose debug output to
parse_manual_file").
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
OSTCTools is Windows-based software by Robert Angeymar which performs
configuration upgrades, memory analysis and download tasks for H&W OSTC
devices.
Downloaded dives are stored in files (one archive each) with the raw
binary data heavily padded at the beginning of the file, plus some other
data not included in the H&W dive header protocol, such as the device's
serial number.
The import function simply takes the raw data part of the file and lets
libdivecomputer do the parsing. It then adds some additional info, such as
the OSTC-reported dive number and the device serial number.
Please note that OSTCTools is *not* real logging software; it simply gets
the raw DC data, so there isn't any information about dive site, equipment
and so on.
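A sketch of the overall approach; how the header is located inside the
padded file is deliberately left out here, since that layout is exactly
what the importer has to know:

    #include <stdio.h>
    #include <stdlib.h>

    /* Read an OSTCTools dive file into memory; the caller would then skip
     * the padding at the beginning, locate the raw H&W dive header and hand
     * that part of the buffer to libdivecomputer for parsing. */
    static unsigned char *ostctools_read_file(const char *path, size_t *len)
    {
            FILE *f = fopen(path, "rb");
            unsigned char *buf = NULL;
            long size;

            if (!f)
                    return NULL;
            fseek(f, 0, SEEK_END);
            size = ftell(f);
            fseek(f, 0, SEEK_SET);
            if (size > 0) {
                    buf = malloc(size);
                    if (buf && fread(buf, 1, size, f) != (size_t)size) {
                            free(buf);
                            buf = NULL;
                    }
            }
            fclose(f);
            *len = buf ? (size_t)size : 0;
            return buf;
    }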
Signed-off-by: Salvador Cuñat <salvador.cunat@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
Move extern declaration of function datatrak_import() to file.h, where it
fits better than in file.c
Signed-off-by: Salvador Cuñat <salvador.cunat@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
NL should be set only if there is an empty line in the input file. That
way the early return when no empty line exists (a simplistic test for a
Seabear CSV file) makes more sense.
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
This adds support for Seabear's new import format that is used by the H3
and T1. In the future the HUDC should also switch to the new format. The
main difference from the old format is that time stamps are no longer
recorded in the samples; instead, the sampling interval is specified in
the header.
The header contains other useful information as well that we should build
support for, e.g. surface pressure, gas mixes, GF and mode might be useful
additions later on.
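In other words, sample times are now derived from the header, roughly like
this (names are illustrative):

    /* With the new format the header gives one sampling interval (in
     * seconds) instead of a time stamp per sample. */
    static int seabear_sample_time(int i, int interval)
    {
            return i * interval;    /* seconds since the start of the dive */
    }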
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
We need to take both \r\n and \n newline formats into account before
failing the Seabear validity check on import. (A Seabear file has empty
lines; if the import file does not have one -> return.)
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
Simple control flow issue. Missing break in switch.
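The class of bug being fixed, in miniature (not the actual code):

    #include <stdio.h>

    static void describe(int type)
    {
            switch (type) {
            case 0:
                    printf("first kind\n");
                    break;  /* without this break, execution falls through */
            case 1:
                    printf("second kind\n");
                    break;
            default:
                    printf("unknown\n");
            }
    }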
Signed-off-by: Marcos CARDINOT <mcardinot@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
This way it's easier to figure out which arguments to use when creating
tests.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
In commit 0ed4356fc28b ("Display slowness warning before opening a V2
file") I changed the xml parser to abort if it detects a V2 XML file so
that the main application can show a warning message (and eventually, a
few choices for the user) before parsing and processing older XML files.
The input that gets pre-processed by XSLT files claims to be V2 XML and so
the parser aborts - yet the code to show a warning and restart the parse
isn't present in this code path, so XSLT based imports always fail.
This hack works around that by temporarily setting the variable that
claims the warning has already been shown to the user.
|
|
- subsurface_fopen() is needed
- translated one puts("") message
- sizeof(unsigned char | char) is always 1
- pointer symbol (*) location
- padding fixes
- braces fixes
Signed-off-by: Lubomir I. Ivanov <neolit123@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
Sequentially parses a file, expected to be a Datatrak/WLog divelog, and
converts the dive info into Subsurface's dive structure.
As my first DC, back in the 90s, was an Aladin Air X, the obvious choice of
log software was DTrak (Win version). After using it for some time we moved
to WLog (shareware software more user friendly than DTrak, printing capable
and, even better, it runs under wine, which, as a linux user, was decisive
for me). Then, some years later, my last Aladin died and I moved to an
OSTC, forcing me to look for software that supports this DC.
I found JDivelog, which was capable of importing DTrak logs, and used it
for some time until I discovered Subsurface and devoted myself to it.
The fact was that importing DTrak dives into JDivelog and then re-importing
them into Subsurface caused significant data loss (mainly in the profile
events and alarms) and odd placement of some other info in the dive notes
(mostly tag items in the original DTrak software). This situation can't
really be solved with tools like divelogs.de, which cause similar if not
greater data loss.
Although this won't be a core feature for Subsurface, I expect it can be as
useful for some other divers as it has been for me.
Comments and issues:
Datatrak/WLog files include a lot of diving data that is not directly
supported in Subsurface; in these cases we mostly choose to use "tags".
The lack of some important info in Datatrak archives (e.g. the tank's
initial pressure) forces us to make some arbitrary assumptions (e.g.
initial pressure = 200 bar).
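As a sketch of what such an assumption looks like in the importer (the
helper name is illustrative; Subsurface stores pressures in mbar):

    /* Datatrak does not store the tank's initial pressure, so assume a
     * default of 200 bar when nothing else is known (sketch only). */
    static int initial_pressure_mbar(int stored_mbar)
    {
            return stored_mbar ? stored_mbar : 200000;      /* 200 bar */
    }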
There might be archives coming directly from the old DOS days, as the
first versions of Datatrak ran on that OS; they were encoded in CP437 or
CP850, while dive logs coming from Win versions seem to be encoded in
CP1252. Finally, WLog seems to use a mixed, confusing style. The program
directly converts some of the old encoded chars to iso8859, but some
issues are expected with non-alphabetic chars, e.g. "ª".
There are two text fields: "Other activities" and "Dive notes", both
limited to 256 chars. We have merged them into Subsurface's "Dive Notes";
the first one could have been "tagged", but we're unsure that the user had
filled it in a tag friendly way.
WLog adds some information to the dive and lets the user write notes
longer than 256 chars. This is achieved, while keeping compatibility with
DTrak divelogs, by adding a complementary file named the same as the .log
file but with an .add extension, where all this info is stored. We have
not yet worked with these complementary files.
This work is based on the paper referenced in bugtracker #194, which has
some errors (e.g. beginning of log and beginning of dive are swapped) and
a lot of bytes of unknown meaning. Example.log shows at least one more
byte than those referred to in the paper for the O2 Aladin computer; this
could be a byte related to the use of SCR, but the lack of an OC dive with
an O2 computer makes it impossible for us to compare.
The only way we have figured out to distinguish a priori between SCR and
non-SCR dives with O2 computers is that such dives are tagged with a
"rebreather" tag. Obviously this is not a very trustworthy way of doing
things. In SCR dives, the O2% in the mix probably means the maximum O2% in
the circuit, not the O2% of the EAN mix in the tanks, which would be
unknown in this case.
The list of DCs in the bug #194 paper seems incomplete; we have added one
or two from WLog and discarded those which are known to exist but whose
model is unknown, grouping them under the imaginative name of "unknown".
The list can easily be extended in the future if we ever learn the model
identifiers.
BTW, in Example.log the 0x00 identifier is used for some DC dives, while
my own divelogs suggest that 0x00 is used for manually entered dives; this
could easily be an error in Example.log coming from a preproduction DC
model.
Example.log, which is shipped in the datatrak package, is included in the
dives directory for testing purposes.
[Dirk Hohndel: some small cleanups, merged with latest master, support
divesites, remove the pointless memset() before free() calls
add to cmake build]
Signed-off-by: Salvador Cuñat <salvador.cunat@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
This increases the limits when parsing CSV files with dive profiles,
allowing us to import bigger files in one go.
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
Fixes #814
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
Add a newline when importing a CSV file, so we get the last dive imported
as well (in case the file ends without a trailing newline).
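The idea in isolation (buffer handling is simplified for the sketch):

    #include <string.h>

    /* Make sure the CSV buffer ends in a newline so the last dive is not
     * lost; one spare byte at the end of the buffer is assumed. */
    static void ensure_trailing_newline(char *buf)
    {
            size_t len = strlen(buf);

            if (len && buf[len - 1] != '\n') {
                    buf[len] = '\n';
                    buf[len + 1] = '\0';
            }
    }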
See #814
Signed-off-by: Miika Turkia <miika.turkia@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
The dive computer model string should indicate that these were imported
from CSV.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
... and repair a failed rebase (sorry).
Signed-off-by: Robert C. Helling <helling@atdotde.de>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|