Hi all! I am trying to reinstall a package that I was previously able to install and use. I was building a package of my own when my computer unexpectedly restarted, and afterwards I started to have problems loading the rpgraph package. So I decided to uninstall it and reinstall it.
Having the same issue with the xlsx package. Fairly confident it is related to the security updates that were released for Ubuntu in the last few days (6/19, I think). I fortunately had not yet installed the updates on my computer, so I was able to confirm that installing the updates is what broke the package install. Further, I have confirmed that the .jpackage function is what is causing the segfault. It happens whether I use .jpackage('xlsx') or .jpackage('xlsxjars'), which have different jars included (so I do not suspect the Java code in particular). To make matters more interesting, the segfault only seems to occur when .jpackage is called during .onLoad or .onAttach, and specifically while the package is being installed: calling it manually works after the package is installed. I would suggest that this is a fairly high priority bug, as I would expect it to affect all Ubuntu OSes and all packages that use rJava with .jpackage called in .onLoad or .onAttach.

Absolutely!
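For context, rJava-based packages typically start the JVM from their load hook; here is a minimal sketch of the kind of .onLoad that triggers the crash (a generic package skeleton, not the actual xlsx source):

    ## Typical load hook of an rJava-dependent package: attach the JVM and put
    ## the package's java/*.jar files on the classpath when the package loads.
    ## Per the report above, it is exactly this call, made while the package is
    ## being installed, that segfaults after the affected updates.
    .onLoad <- function(libname, pkgname) {
      .jpackage(pkgname, lib.loc = libname)
    }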
The segmentation fault bug also affects RJDBC: we cannot access our Hive DB any more after the latest OS updates (including the Linux kernel). I tested with the latest Debian, CentOS and Ubuntu OSes and with JDK 8 (Oracle JDK and OpenJDK); the error persists. A workaround seems to be using the older JDK version 1.7.0_95, but I'm still testing. This does not work either :-( or, to be precise, RJDBC still fails in 57% of cases with a segmentation fault. Increasing the stack size of the JVM by setting the -Xss2560k Java option did the trick for me and got RJDBC back to work: options(java.parameters = c('-Xss2560k', '-Xmx2g')), then library(rJava), library(RJDBC) and .jinit(classpath = c(hive.jar.path, hadoop.jar.path)).
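Spelled out as a runnable sequence, here is a minimal sketch of that workaround; the two jar paths are placeholders (not from this thread) standing in for the Hive and Hadoop driver jars:

    ## Set the JVM options *before* the JVM is started: rJava reads
    ## getOption("java.parameters") when it initializes the VM, so this must
    ## come before library(rJava) / .jinit().
    options(java.parameters = c("-Xss2560k", "-Xmx2g"))  # bigger thread stack, 2 GB heap

    library(rJava)
    library(RJDBC)

    ## Placeholder jar locations - point these at the actual Hive/Hadoop
    ## JDBC jars on your system.
    hive.jar.path   <- "/opt/hive/lib/hive-jdbc-standalone.jar"
    hadoop.jar.path <- "/opt/hadoop/share/hadoop/common/hadoop-common.jar"

    ## Start the JVM with the jars on the classpath.
    .jinit(classpath = c(hive.jar.path, hadoop.jar.path))

The same options() call should also help packages that start the JVM via .jpackage, as long as it runs before the package is loaded, since .jpackage picks up java.parameters in the same way.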
So a couple of workarounds have been suggested to counteract the segmentation faults: the above mentioned workaround (increasing the JVM stack size) and disabling Address Space Layout Randomization.
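Whether ASLR is currently active can at least be checked from within R; disabling it has to be done as root outside of R. The sysctl name below is the standard Linux one, not something quoted in this thread, so treat this as a sketch:

    ## Address Space Layout Randomization status on Linux:
    ## 0 = disabled, 1 = conservative, 2 = full randomization (usual default).
    ## Disabling it system-wide (as root, outside R) would look like:
    ##   sysctl -w kernel.randomize_va_space=0
    aslr <- readLines("/proc/sys/kernel/randomize_va_space", warn = FALSE)
    cat("kernel.randomize_va_space =", aslr, "\n")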
But it would be interesting to know why the bug fix causes such issues and how to fix the root cause. I get the segmentation fault with a fresh Siduction install (buster/sid) with kernel 4.14-6 (2017-11-30) during a call to JGR. The stacktrace above is a follow-up error because Rengine does not return any result, so it is not the cause of the issue itself. Within JGR the segmentation fault occurs the moment library(rJava) is loaded (loading of default packages) and the next R command is sent. This can also be observed using JRI/examples/run rtest2: there as well, calling library(rJava) followed by, for example, .jinit makes the JVM exit with a segmentation fault without any further stacktrace. Several kernel versions have been used, from 4.9 up to 4.14.

Under some operating systems, a segmentation fault causes the crashed program to produce a core dump, a file with a snapshot of its state and memory at the time of the crash.
When a segmentation fault occurs in Linux, the error message Segmentation fault (core dumped) will be printed to the terminal (if any), and the program will be terminated. As a C/C++ dev, this happens to me quite often, and I usually ignore it and move on to gdb, recreating my previous action in order to trigger the invalid memory reference again. Instead, I thought I might be able to use this 'core', as running gdb all the time is rather tedious, and I cannot always recreate the segmentation fault. My questions are three: Where is this elusive 'core' dumped?
What does it contain? What can I do with it? If other people clean up, you usually don't find anything. But luckily Linux has a handler for this which you can specify at runtime. In the kernel documentation you will find that /proc/sys/kernel/core_pattern is used to specify a core dumpfile pattern name.
If the first character of the pattern is a '|', the kernel will treat the rest of the pattern as a command to run. The core dump will be written to the standard input of that program instead of to a file. According to the source this is handled by the abrt program (that's Automatic Bug Reporting Tool, not abort), but on my Arch Linux it is handled by systemd.
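To see which case applies on a given machine, the pattern can be read straight out of /proc, even from the R session in which the segfaults above occur (Linux only; the interpretation in the comments follows the documentation quoted above):

    ## Where will a core dump go on this system?
    ## A leading "|" means the kernel pipes the dump to a handler program
    ## (abrt, systemd-coredump, ...); otherwise it is written as a file
    ## named according to the pattern.
    pattern <- readLines("/proc/sys/kernel/core_pattern", warn = FALSE)
    cat("core_pattern:", pattern, "\n")
    if (startsWith(pattern, "|")) {
      cat("core dumps are piped to a handler program\n")
    } else {
      cat("core dumps are written to a file matching the pattern\n")
    }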
You may want to write your own handler or use the current directory. But what's in there? Now, what it contains is system specific, but in general: a core dump consists of the recorded state of the working memory of a computer program at a specific time. In practice, other key pieces of program state are usually dumped at the same time, including the processor registers, which may include the program counter and stack pointer, memory management information, and other processor and operating system flags and information.
So it basically contains everything gdb ever wanted, and more. Yeah, but I'd like me to be happy instead of gdb. You can both be happy, since gdb will load any core dump as long as you have an exact copy of your executable: gdb path/to/binary my/core.dump. You should then be able to continue business as usual and be annoyed by trying and failing to fix bugs instead of trying and failing to reproduce them.
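For the rJava crashes above, the executable that matches the core dump is the R binary itself; here is a small sketch (assuming a standard Linux build of R, and a core file located as described above) of how to get the path to hand to gdb:

    ## The R session runs inside R_HOME/bin/exec/R on Linux, so that is the
    ## binary to load together with the core dump left by the rJava segfault:
    ##   gdb <r_binary> <path/to/core>
    r_binary <- file.path(R.home("bin"), "exec", "R")
    cat("load the dump with: gdb", r_binary, "path/to/core", "\n")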
The core file is normally called core and is located in the current working directory of the process. However, there is a long list of reasons why a core file would not be generated, and it may be located somewhere else entirely, under a different name. See the core(5) manual page for details:

DESCRIPTION
The default action of certain signals is to cause a process to terminate and produce a core dump file, a disk file containing an image of the process's memory at the time of termination. This image can be used in a debugger (e.g., gdb(1)) to inspect the state of the program at the time that it terminated.
A list of the signals which cause a process to dump core can be found in signal(7). There are various circumstances in which a core dump file is not produced:

* The process does not have permission to write the core file. (By default, the core file is called core or core.pid, where pid is the ID of the process that dumped core, and is created in the current working directory.