.. and add the usual logic to not save the default values.
This also simplifies the initial system-specific setup of both of these:
since we have defaults for all the preferences that get set up at
startup, we can just initialize those defaults to the system-specific
fonts then and there.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
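As a hedged illustration of the "don't save the default values" logic (the struct fields and the save_pref_string() helper here are invented for the sketch, not taken from the commit):

#include <string.h>

struct preferences {
	const char *divelist_font;	/* filled in with the system-specific font at startup */
};

/* hypothetical globals: current settings plus the defaults initialized at startup */
extern struct preferences prefs, default_prefs;

/* hypothetical backend writer */
extern void save_pref_string(const char *key, const char *value);

static void save_font_pref(void)
{
	/* only persist the font if the user actually changed it away from the default */
	if (prefs.divelist_font && strcmp(prefs.divelist_font, default_prefs.divelist_font))
		save_pref_string("divelist_font", prefs.divelist_font);
}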
|
|
This was necessary for the Uemis downloader when we used the SDA file
format as intermediary data format and imported that as XML buffer.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
For now we only have one fixed divecomputer associated with each dive,
so this doesn't really change any current semantics. But it will make
it easier for us to associate a dive with multiple dive computers.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
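To make the direction concrete, this is roughly the shape such an association can take (the field names are illustrative, not the commit's):

struct sample;	/* depth/pressure/temperature samples */

struct divecomputer {
	int samples, alloc_samples;
	struct sample *sample;
	struct divecomputer *next;	/* room to chain further computers later */
};

struct dive {
	int number;
	/* one embedded dive computer for now; extra downloads can hang off dc.next */
	struct divecomputer dc;
};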
|
|
We used to avoid some extra allocations by just allocating the dive
samples as part of the 'struct dive' allocation itself, but that ends up
complicating things, and will make it impossible to have multiple
different sets of samples (for multiple dive computers).
So stop doing it. Just allocate the dive samples array separately.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
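In a nutshell, with made-up helper names: keep a separately allocated, growable sample array instead of tacking the samples onto the end of the dive allocation:

#include <stdlib.h>

struct sample { int time, depth; };

struct dive {
	int samples, alloc_samples;
	struct sample *sample;	/* its own allocation, no longer part of 'struct dive' */
};

/* hypothetical helper: grow the array on demand and hand back the next slot */
static struct sample *prepare_sample(struct dive *dive)
{
	if (dive->samples >= dive->alloc_samples) {
		int newalloc = dive->alloc_samples ? dive->alloc_samples * 2 : 16;
		struct sample *s = realloc(dive->sample, newalloc * sizeof(*s));
		if (!s)
			return NULL;
		dive->sample = s;
		dive->alloc_samples = newalloc;
	}
	return dive->sample + dive->samples++;
}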
|
|
When this was first implemented the assumption was that a downloaded dive
that is to be merged with an existing dive would have the same time stamp.
But as Linus pointed out even back then, this does fail if a dive has been
merged with a download from a different dive computer before (think:
download from computer a, then download same dive from b, then improve
something in the parsing from computer a and try to redownload; the time
stamp could have changed).
This commit also fixes a silly omission in the merge_dives() function
(which ended up ALWAYS preferring the downloaded dive) and finally
implements the necessary changes to mark dives downloaded from a Uemis SDA
as well.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
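One way to picture the matching part of this (the flag, the names and the exact time window are invented for illustration; the real code may well decide differently):

#include <stdlib.h>

typedef long long timestamp_t;

struct dive {
	timestamp_t when;
	int downloaded;	/* set on dives that came straight from a dive computer */
};

/* treat two dives as the same dive if their start times are close enough,
 * since merging data from different computers can shift the time stamp */
static int dives_match(const struct dive *old, const struct dive *new)
{
	const timestamp_t fuzz = 10 * 60;	/* arbitrary 10-minute window */
	return llabs(old->when - new->when) <= fuzz;
}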
|
|
The default filename handling is broken in two different ways:
(a) if we start subsurface with a non-existing file, we warn about
the inability to read that file, and then we exit without setting the
default filename.
This is broken because it means that if the user (perhaps by mistake,
by pressing ^S) now saves the file, he will overwrite the default
filename, even though that was *not* the file we read, and *not* the
file that subsurface was started with.
So just set the default filename even for a failed file open.
The exact same logic is true of a failed parse of an XML file that we
successfully opened. We do *not* want to leave the old default
filename in place just because the XML parsing failed, and possibly
then overwriting some file that was never involved with that failure
in the first place. So just get rid of all the logic to push the
filename saving into the XML parsing layer, it has zero relevance at
that point.
(b) if we do replace the default filename with a NULL file, we need
to set that even if we cannot do a strdup() on the NULL.
This fixes both errors.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
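A sketch of the intended behaviour, assuming a hypothetical set_filename() helper: it gets called with whatever file subsurface was asked to open, before we know whether the open or the parse succeeded, and a NULL filename is stored as NULL rather than skipped because strdup() has nothing to copy:

#include <stdlib.h>
#include <string.h>

static char *default_filename;

/* hypothetical helper: remember the file the user asked for */
static void set_filename(const char *filename)
{
	free(default_filename);
	/* store NULL as NULL instead of bailing out when strdup() can't be used */
	default_filename = filename ? strdup(filename) : NULL;
}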
|
|
This is just the first step - convert the string literals, try to catch
all the places where this isn't possible and the program needs to convert
string constants at runtime (those are the N_ macros).
Add a very rough first German localization so I can at least test what I
have done. Seriously, I have never used a localized OS, so I am certain
that I have many of the 'standard' translations wrong. Someone please take
over :-)
Major issues with this:
- right now it hardcodes the search path for the message catalog to be
./locale - that's of course bogus, but it works well while doing initial
testing. Once the tooling support is there we just should use the OS
default.
- even though de_DE defaults to ISO-8859-15 (or ISO-8859-1 - the internets
can't seem to agree) I went with UTF-8 as that is what Gtk appears to
want to use internally. ISO-8859-15 encoded .mo files create funny
looking artefacts instead of Umlaute.
- no support at all in the Makefile - I was hoping someone with more
experience in how to best set this up would contribute a good set of
Makefile rules - likely this will help fix the first issue in that it
will also install the .mo file(s) in the correct place(s)
For now simply run
msgfmt -c -o subsurface.mo deutsch.po
to create the subsurface.mo file and then move it to
./locale/de_DE.UTF-8/LC_MESSAGES/subsurface.mo
If you make changes to the sources and need to add new strings to be
translated, this is what seems to work (again, should be tooled through
the Makefile):
xgettext -o subsurface-new.pot -s -k_ -kN_ --add-comments="++GETTEXT" *.c
msgmerge -s -U po/deutsch.po subsurface-new.pot
If you do this PLEASE do one commit that just has the new msgids, as
changes in line numbers create a TON of diff-noise. Do changes to
translations in a SEPARATE commit.
- no testing at all on Windows or Mac
It builds on Windows :-)
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
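For reference, the conversion pattern described above is plain gettext usage; the textdomain name and the hardcoded "./locale" path come from this commit, the rest is a generic sketch:

#include <libintl.h>
#include <locale.h>
#include <stdio.h>

#define _(String)  gettext(String)
#define N_(String) String	/* mark for extraction only; translate later at runtime */

/* strings in static storage can only be marked, not translated in place */
static const char *example_label = N_("Dive list");

int main(int argc, char **argv)
{
	setlocale(LC_ALL, "");
	bindtextdomain("subsurface", "./locale");	/* the hardcoded search path mentioned above */
	textdomain("subsurface");

	printf("%s\n", _("Date"));			/* translated at the point of use */
	printf("%s\n", gettext(example_label));		/* N_-marked strings go through gettext() when used */
	return 0;
}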
|
|
Only files that are opened should be considered r/w. Files that are
imported should be treated as if they were r/o.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
On Windows, the GLib wrappers for fopen() and open() deal with the UTF-8
format used for file names when we have to open or save a file with
unicode characters in its name.
Signed-off-by: Lubomir I. Ivanov <neolit123@gmail.com>
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
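Concretely, these are the wrappers from <glib/gstdio.h>: they accept the UTF-8 file name and convert it for the wide-character Windows API internally, while on other platforms they simply call through to the C library. A minimal example:

#include <stdio.h>
#include <glib/gstdio.h>

/* open a file whose name is UTF-8 encoded, as GLib file names are on Windows */
static FILE *open_log(const char *utf8_name)
{
	return g_fopen(utf8_name, "rb");	/* instead of a plain fopen() */
}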
|
|
This makes the time type unambiguous, and we can use G_TYPE_INT64 for it
in the divelist too.
It also implements a portable (and thread-safe) "utc_mkdate()" function
that acts kind of like gmtime_r(), but using the 64-bit timestamp_t. It
matches our original "utc_mktime()".
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
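As a rough sketch of the idea (not the actual implementation): a 64-bit timestamp_t plus timezone-independent calendar math, shown here for the utc_mktime() direction that utc_mkdate() inverts:

#include <stdint.h>
#include <time.h>

typedef int64_t timestamp_t;	/* seconds since 1970-01-01 00:00:00 UTC */

/* days before the start of each month in a non-leap year */
static const int mdays[] = { 0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334 };

/* mktime() without timezone, locale or DST involvement */
static timestamp_t utc_mktime_sketch(const struct tm *tm)
{
	int year = tm->tm_year + 1900;
	int month = tm->tm_mon;		/* 0..11 */
	timestamp_t days;

	/* whole days since the epoch, counting leap days before this year */
	days = (year - 1970) * 365LL
	     + (year - 1969) / 4 - (year - 1901) / 100 + (year - 1601) / 400
	     + mdays[month] + tm->tm_mday - 1;
	/* add this year's leap day once we're past February */
	if (month > 1 && ((year % 4 == 0 && year % 100 != 0) || year % 400 == 0))
		days++;
	return ((days * 24 + tm->tm_hour) * 60 + tm->tm_min) * 60 + tm->tm_sec;
}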
|
|
The user may not have created it yet.
Signed-off-by: Dirk Hohndel <dirk@hohndel.org>
|
|
GTK messes up the standard C library locales by default (instead of just
taking locale information into account internally). Which breaks
'strtod()' and 'printf()' etc. Since they screwed that up, they then
added helper functions for undoing that braindamage. Use it.
I'd like to blame the GTK people, but the standard C library people bear
*some* responsibility for this. One of the reasons why people do not
use "setlocale()" in many normal programs is exactly because it messes
up core libc functionality - with number conversion being the main
thing.
Doing things like converting numbers in a locale-specific manner is
something people do want to do, but not *always*. So the C library
locale code should always have defaulted to C locale, with some *extra*
marker (like a printf/scanf modifier) to say "print/scan in the current
locale".
Because many things absolutely need to be non-localized. You don't
want your internal file format to magically change just because you want
to show things to the user in France, for example.
Reported-by: Ivan Habunek <ivan.habunek@gmail.com>
Root-caused-by: Jef Driesen <jefdriesen@telenet.be>
Cc: Dirk Hohndel <dirk@hohndel.org>
Cc: Lubomir I. Ivanov <neolit123@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
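The helpers in question are GLib's locale-independent number conversions; a small example of the pattern for values that must stay in the C locale no matter what setlocale()/gtk_init() did:

#include <glib.h>

/* parse a number read from a data file: must not depend on the user's LC_NUMERIC */
static double parse_number(const char *text)
{
	return g_ascii_strtod(text, NULL);	/* always expects '.' as the decimal point */
}

/* and write it back out the same way */
static void format_number(char *buf, gint len, double value)
{
	g_ascii_formatd(buf, len, "%.2f", value);	/* always produces '.' */
}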
|
|
O_TEXT is the default mode for fcntl's open() on Windows, and for files
created there line endings are counted by fstat() as CR+LF, adding an
extra byte for each line. The result is that, while the file can still
be read into a buffer, the read() return value (ret) has a different
size compared to the previously allocated buffer, breaking at:
if (ret == mem->size)
A solution is to open() the file in O_BINARY mode, which should
technically suppress the EOL translation.
Signed-off-by: Lubomir I. Ivanov <neolit123@gmail.com>
[ Fixed to work under real operating systems that don't need this crap.
"Here's a nickel, kid, go and buy a real OS". - Linus ]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
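One portable way to express this (a sketch, not necessarily the exact fix applied here) is to make O_BINARY a no-op where the platform doesn't define it:

#include <fcntl.h>

/* POSIX systems do no EOL translation and don't define O_BINARY */
#ifndef O_BINARY
#define O_BINARY 0
#endif

static int open_raw(const char *filename)
{
	/* O_BINARY keeps the read() byte count in sync with what fstat() reported */
	return open(filename, O_RDONLY | O_BINARY, 0);
}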
|
|
In file.c::readfile() the file was being opened once at fd declaration
time and then again a few lines later, but only being closed once. Remove
the open() at fd declaration time, leaving the later one where the fd
check is done.
Signed-off-by: Andrew Clayton <andrew@digital-domain.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
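In sketch form (readfile() itself obviously does more than this):

#include <fcntl.h>

static int readfile_open(const char *filename)
{
	int fd;		/* just the declaration; no open() hiding here anymore */

	fd = open(filename, O_RDONLY);	/* the single open(), right where it gets checked */
	if (fd < 0)
		return -1;
	return fd;	/* the caller close()s it exactly once */
}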
|
|
The Cochran CSV pressure data is actually in units of '4 psi', not in
just psi. That seems to be the resolution Cochran internally keeps
things in, and unlike the depth reading there's no conversion to
standard units in the export (for depth, the quarter-foot depth
resolution is converted to tenths of feet when exporting).
Yeah, none of this makes any sense to me either, but I knew it was the
case. I had just forgotten that factor-of-four when I did the importer.
With this fix, I get the same subsurface data (modulo some rounding
differences particularly for temperature) whether I go through David
McNett's UDDF converter, or just import the CSV data directly.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
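In other words the importer has to multiply the raw CSV value by four before converting to internal units; roughly like this (the function name is made up, 1 psi = 68.9476 mbar is the standard factor):

/* the CSV pressure column is in units of 4 psi: scale first, then convert */
static int cochran_pressure_mbar(int raw)
{
	double psi = raw * 4.0;
	return (int)(psi * 68.9476 + 0.5);
}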
|
|
The Cochran Analyst software can export the basic dive information as
CSV files (comma-separated values).
Individual CSV files contain just one particular type of information:
depth, temperature or cylinder pressure, which is rather inconvenient.
However, the way subsurface works, you can just import these CSV files
all as individual dives, and then subsurface will automatically merge
the dives with the same date and time - and in the process it will also
merge all the samples.
So it turns out that we don't really need any special handling. You can
literally just do
subsurface <list-your-cochran-export-files-here>
and you're all done.
Of course, the CSV files really *are* pretty useless, since they don't
contain all the nice information about where the dive took place etc.
So you literally just get the dive profile. But that's better than
getting nothing at all.
I'd love to actually be able to parse the real native Cochran Analyst
software CAN files, but in the meantime this is at least a starting
point. And if I'm ever able to parse those nasty CAN-files, this makes
comparisons with the exports much easier.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
It's broken, and currently only writes out a debug output file per dive.
I'm not sure I'll ever really be able to decode the mess that is the
Cochran Analyst stuff, but I have a few test files, along with separate
depth info from a couple of the dives in question, so in case this ever
works I can at least validate it to some degree.
The file format is definitely very intentionally obscured, though.
Annoying. It's not like the Cochran software is actually all that good
(it's really quite a horribly nasty Windows-only app, I'm told).
Cochran Analyst is very much not the reason why people would buy those
computers. So Cochran making their computers harder to use with other
software is just stupid.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Most of the parsers will want the content in memory, so keep them
simple. The fact that the Suunto parser uses "libzip", which has to
re-open the file by name, is annoying and costs us an extra open.
But it's the odd man out, so don't design the "open_by_filename()"
function around it. Pretty much everybody else will want to avoid
having to cook up their own IO routines.
Also, when reading the file, NUL-terminate the buffer. This allows us
to just treat text files as large strings if we want to, and doesn't
matter for binary files (we still pass in the length explicitly).
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
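A condensed sketch of that kind of readfile() helper, assuming a simple memblock struct of buffer plus size (the names are illustrative):

#include <fcntl.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

struct memblock {
	void *buffer;
	size_t size;
};

/* read the whole file into memory, NUL-terminated so text parsers can treat
 * it as one big string; binary parsers still get the explicit size */
static int readfile(const char *filename, struct memblock *mem)
{
	int fd;
	struct stat st;
	ssize_t ret;

	mem->buffer = NULL;
	mem->size = 0;
	fd = open(filename, O_RDONLY);
	if (fd < 0)
		return -1;
	if (fstat(fd, &st) < 0 || st.st_size <= 0)
		goto out;
	mem->buffer = malloc(st.st_size + 1);	/* +1 for the terminating NUL */
	if (!mem->buffer)
		goto out;
	ret = read(fd, mem->buffer, st.st_size);
	if (ret == st.st_size) {
		mem->size = ret;
		((char *)mem->buffer)[mem->size] = 0;
		close(fd);
		return 0;
	}
	free(mem->buffer);
	mem->buffer = NULL;
out:
	close(fd);
	return -1;
}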
|
|
I apparently was so congested that it affected my typing too when I
wrote that, and then copy-paste meant that the use and declaration
matched despite the misspelling.
Reported-by: Henrik Brautaset Aronsen <subsurface@henrik.synth.no>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
You need to have libzip-devel installed, and pkg-config needs to know about it
for the build to pick up on it.
On at least Fedora, a simple "yum install libzip-devel" will make things
work, although you may need to force a rebuild of subsurface too (the
"file.o" file in particular - the Makefile doesn't track system
dependencies).
Then, you can just do
subsurface my-dives.SDE
to read the data directly from the SDE file.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
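For context, reading the SDE this way comes down to walking the zip archive with libzip and handing each contained XML entry to the parser; roughly like this (error handling trimmed, a fixed buffer instead of sizing via zip_stat(), and the parse step only hinted at):

#include <zip.h>

/* an SDE file is a zip archive of XML dive data: iterate over the entries */
static int read_sde(const char *filename)
{
	int i, err = 0;
	struct zip *zip = zip_open(filename, 0, &err);

	if (!zip)
		return -1;
	for (i = 0; i < zip_get_num_files(zip); i++) {
		char buf[65536];
		struct zip_file *file = zip_fopen_index(zip, i, 0);
		int n;

		if (!file)
			continue;
		n = (int)zip_fread(file, buf, sizeof(buf) - 1);
		if (n > 0) {
			buf[n] = 0;
			/* ...hand 'buf' to the XML import here... */
		}
		zip_fclose(file);
	}
	zip_close(zip);
	return 0;
}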
|
|
We're going to eventually import non-xml files too, so let's begin
splitting the logic up.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|