Commit message | Author | Age | Files | Lines
...
* defconfig: enable rumble/leds on xbox360 controllers & regen defconfigMoyster2017-09-051-4/+26
|
* defconfig: Enable usb host driver for joystick/gamepadHemant Kumar2017-09-051-0/+2
| | | | | | | | | | The current kernel configuration does not support joysticks/gamepads. Controllers like the Logitech F710 require this driver to work. CRs-Fixed: 758193 Change-Id: Id2e75e019eff3855b5fa2cc778093af68aef1511 Signed-off-by: Hemant Kumar <hemantk@codeaurora.org> Signed-off-by: Moyster <oysterized@gmail.com>
* kthread: Backport queuing_blocked()Petr Mladek2017-09-041-0/+12
| | | | | | | | | This patch backports the queuing_blocked() function from Linux mainline and places it into the kthread header so it is accessible everywhere. Signed-off-by: Alex Naidis <alex.naidis@linux.com> Signed-off-by: Joe Maples <joe@frap129.org>
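For reference, the mainline helper being backported is tiny; a sketch of its shape (mirroring mainline kernel/kthread.c — placing it in the header is this tree's change):

    /*
     * Returns true when queuing the work is blocked at the moment:
     * it is already pending on a worker list, or a cancel is in flight.
     * The caller must hold worker->lock.
     */
    static inline bool queuing_blocked(struct kthread_worker *worker,
                                       struct kthread_work *work)
    {
            lockdep_assert_held(&worker->lock);

            return !list_empty(&work->node) || work->canceling;
    }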
* ARM: mm: Don't force swp_emulateChad Cormier Roussel2017-09-041-1/+1
| | | | Signed-off-by: Joe Maples <joe@frap129.org>
* FS: F2FS: Use jiffiesDorimanx2017-09-047-8/+9
| | | | Signed-off-by: Joe Maples <joe@frap129.org>
* iio: Add iio_push_to_buffers_with_timestamp() helperLars-Peter Clausen2017-08-311-0/+25
| | | | | | | | | | | | | | | | | | | | | | | | | | Drivers using software buffers often store the timestamp in their data buffer before calling iio_push_to_buffers() with that data buffer. Storing the timestamp in the buffer usually involves some ugly pointer arithmetic. This patch adds a new helper function called iio_push_to_buffers_with_timestamp() which is similar to iio_push_to_buffers() but takes an additional timestamp parameter. The function helps to hide the ugliness in one central place instead of exposing it in every driver. If timestamps are enabled for the IIO device, iio_push_to_buffers_with_timestamp() will store the timestamp as the last element in the buffer before passing the buffer on to iio_push_to_buffers(). The buffer needs to be large enough to hold the timestamp in this case. If timestamps are disabled, iio_push_to_buffers_with_timestamp() will behave just like iio_push_to_buffers(). Signed-off-by: Lars-Peter Clausen <lars@metafoo.de> Cc: Oleksandr Kravchenko <o.v.kravchenko@globallogic.com> Cc: Josh Wu <josh.wu@atmel.com> Cc: Denis Ciocca <denis.ciocca@gmail.com> Cc: Manuel Stahl <manuel.stahl@iis.fraunhofer.de> Cc: Ge Gao <ggao@invensense.com> Cc: Peter Meerwald <pmeerw@pmeerw.net> Cc: Jacek Anaszewski <j.anaszewski@samsung.com> Cc: Fabio Estevam <fabio.estevam@freescale.com> Cc: Marek Vasut <marex@denx.de> Signed-off-by: Jonathan Cameron <jic23@kernel.org>
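The helper described above is, in upstream form, a small inline in include/linux/iio/buffer.h; sketched here for reference:

    static inline int iio_push_to_buffers_with_timestamp(struct iio_dev *indio_dev,
            void *data, int64_t timestamp)
    {
            /* If timestamps are enabled, hide the pointer arithmetic here:
             * store the timestamp as the last s64 element of the scan. */
            if (indio_dev->scan_timestamp) {
                    size_t ts_offset = indio_dev->scan_bytes / sizeof(int64_t) - 1;
                    ((int64_t *)data)[ts_offset] = timestamp;
            }

            return iio_push_to_buffers(indio_dev, data);
    }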
* crypto: memneq - fix for archs without efficient unaligned accessDaniel Borkmann2017-08-311-1/+2
| | | | | | | | | | | | Commit fe8c8a126806 introduced a possible build error for archs that do not have CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS set. :/ Fix this up by bringing else braces outside of the ifdef. Reported-by: Fengguang Wu <fengguang.wu@intel.com> Fixes: fe8c8a126806 ("crypto: more robust crypto_memneq") Signed-off-by: Daniel Borkmann <dborkman@redhat.com> Acked-By: Cesar Eduardo Barros <cesarb@cesarb.eti.br> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: more robust crypto_memneqCesar Eduardo Barros2017-08-312-30/+54
| | | | | | | | | | | | | | | | | | | | | | | | | | | Disabling compiler optimizations can be fragile, since a new optimization could be added to -O0 or -Os that breaks the assumptions the code is making. Instead of disabling compiler optimizations, use a dummy inline assembly (based on RELOC_HIDE) to block the problematic kinds of optimization, while still allowing other optimizations to be applied to the code. The dummy inline assembly is added after every OR, and has the accumulator variable as its input and output. The compiler is forced to assume that the dummy inline assembly could both depend on the accumulator variable and change the accumulator variable, so it is forced to compute the value correctly before the inline assembly, and cannot assume anything about its value after the inline assembly. This change should be enough to make crypto_memneq work correctly (with data-independent timing) even if it is inlined at its call sites. That can be done later in a followup patch. Compile-tested on x86_64. Signed-off-by: Cesar Eduardo Barros <cesarb@cesarb.eti.br> Acked-by: Daniel Borkmann <dborkman@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
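The "dummy inline assembly" described here is the pattern that later became OPTIMIZER_HIDE_VAR() in compiler.h; a minimal sketch of the idea (the loop body and the memneq_accumulate name are illustrative, not the exact crypto/memneq.c code):

    /* Zero-size asm that forces the compiler to assume 'var' may have
     * changed, blocking value tracking across this point (RELOC_HIDE-style). */
    #define OPTIMIZER_HIDE_VAR(var) __asm__ ("" : "=r" (var) : "0" (var))

    static unsigned long memneq_accumulate(const unsigned long *a,
                                           const unsigned long *b, size_t words)
    {
            unsigned long neq = 0;
            size_t i;

            for (i = 0; i < words; i++) {
                    neq |= a[i] ^ b[i];
                    /* Prevent the optimizer from short-circuiting on neq != 0,
                     * keeping the timing independent of the data. */
                    OPTIMIZER_HIDE_VAR(neq);
            }
            return neq;
    }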
* crypto: xts - fix compile errorsStephan Mueller2017-08-312-0/+2
| | | | | | | | Commit 28856a9e52c7 missed the addition of the crypto/xts.h include file for different architecture-specific AES implementations. Signed-off-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: xts - consolidate sanity check for keysStephan Mueller2017-08-3110-53/+57
| | | | | | | | | | | | | | | The patch centralizes the XTS key check logic into the service function xts_check_key, which is invoked from the different XTS implementations. With this, the XTS implementations in ARM, ARM64, PPC and S390 now have a sanity check for the XTS keys similar to the other arches. In addition, this service function received a check to ensure that the key is not identical to the tweak key, which is mandated by FIPS 140-2 IG A.9. As the check is not present in the standards defining XTS, it is only enforced in FIPS mode of the kernel. Signed-off-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
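The centralized check is, in its upstream form (include/crypto/xts.h, added by 28856a9e52c7), roughly:

    static inline int xts_check_key(struct crypto_tfm *tfm,
                                    const u8 *key, unsigned int keylen)
    {
            u32 *flags = &tfm->crt_flags;

            /* An XTS key is two keys of equal size concatenated,
             * so the total length must be even. */
            if (keylen % 2) {
                    *flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
                    return -EINVAL;
            }

            /* FIPS 140-2 IG A.9: the key must differ from the tweak key.
             * Not in the XTS standards, so enforced in FIPS mode only. */
            if (fips_enabled &&
                !crypto_memneq(key, key + (keylen / 2), keylen / 2)) {
                    *flags |= CRYPTO_TFM_RES_WEAK_KEY;
                    return -EINVAL;
            }

            return 0;
    }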
* crypto: arm64/aes-ctr - fix NULL dereference in tail processingArd Biesheuvel2017-08-311-1/+1
| | | | | | | | | | | | | | | | | | | | | The AES-CTR glue code avoids calling into the blkcipher API for the tail portion of the walk, by comparing the remainder of walk.nbytes modulo AES_BLOCK_SIZE with the residual nbytes, and jumping straight into the tail processing block if they are equal. This tail processing block checks whether nbytes != 0, and does nothing otherwise. However, in case of an allocation failure in the blkcipher layer, we may enter this code with walk.nbytes == 0, while nbytes > 0. In this case, we should not dereference the source and destination pointers, since they may be NULL. So instead of checking for nbytes != 0, check for (walk.nbytes % AES_BLOCK_SIZE) != 0, which implies the former in non-error conditions. Fixes: 49788fe2a128 ("arm64/crypto: AES-ECB/CBC/CTR/XTS using ARMv8 NEON and Crypto Extensions") Cc: stable@vger.kernel.org Reported-by: xiakaixu <xiakaixu@huawei.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* arm64: crypto: assure that ECB modes don't require an IVJeremy Linton2017-08-311-2/+2
| | | | | | | | | ECB modes don't use an initialization vector. The kernel /proc/crypto interface doesn't reflect this properly. Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
* arm64/crypto: remove redundant update of dataColin Ian King2017-08-311-1/+0
| | | | | | | | | | | | | | | Originally found by cppcheck: [arch/arm64/crypto/sha2-ce-glue.c:153]: (warning) Assignment of function parameter has no effect outside the function. Did you forget dereferencing it? Updating data by blocks * SHA256_BLOCK_SIZE at the end of sha2_finup is redundant code and can be removed. Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
* scripts/tags.sh: catch 4.9-rc6Jaegeuk Kim2017-08-311-108/+131
| | | | Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
* arm64: mm: fix show_pte KERN_CONT falloutMark Rutland2017-08-311-3/+4
| | | | | | | | | | | | | | | | | | | | Recent changes made KERN_CONT mandatory for continued lines. In the absence of KERN_CONT, a newline may be implicitly inserted by the core printk code. In show_pte, we (erroneously) use printk without KERN_CONT for continued prints, resulting in output being split across a number of lines, and not matching the intended output, e.g. [ff000000000000] *pgd=00000009f511b003 , *pud=00000009f4a80003 , *pmd=0000000000000000 Fix this by using pr_cont() for all the continuations. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
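The fix pattern, sketched (field widths and values illustrative, not the exact show_pte() lines):

    /* The first fragment carries the log level; continuations use
     * pr_cont() so the core printk code cannot insert implicit newlines. */
    pr_alert("[%08lx] *pgd=%016llx", addr, (long long)pgd_val(*pgd));
    pr_cont(", *pud=%016llx", (long long)pud_val(*pud));
    pr_cont(", *pmd=%016llx", (long long)pmd_val(*pmd));
    pr_cont("\n");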
* audit: Fix sleep in atomicJan Kara2017-08-311-2/+14
| | | | | | | | | | | | Audit tree code was happily adding new notification marks while holding spinlocks. Since fsnotify_add_mark() acquires group->mark_mutex this can lead to sleeping while holding a spinlock, deadlocks due to lock inversion, and probably other fun. Fix the problem by acquiring group->mark_mutex earlier. CC: Paul Moore <paul@paul-moore.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Paul Moore <paul@paul-moore.com>
* sound: soc: optimize for sizethecrazyskull2017-08-311-0/+2
| | | | | | | Change-Id: Ie2e60f58fb809c0173adcfab838da19e17775117 Signed-off-by: engstk <eng.stk@sapo.pt> Signed-off-by: Francisco Franco <franciscofranco.1990@gmail.com>
* cputime: Fix jiffies based cputime assumption on steal accountingFrederic Weisbecker2017-08-311-5/+11
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The steal guest time accounting code assumes that cputime_t is based on jiffies. So when CONFIG_NO_HZ_FULL=y, which implies that cputime_t is based on nsecs, steal_account_process_tick() passes the delta in jiffies to account_steal_time() which then accounts it as if it's a value in nsecs. As a result, accounting 1 second of steal time (with HZ=100 that would be 100 jiffies) is spuriously accounted as 100 nsecs. As such /proc/stat may report 0 values of steal time even when two guests have run concurrently for a few seconds on the same host and same CPU. In order to fix this, let's convert the nsecs based steal delta to cputime instead of jiffies by using the right conversion API. Given that the steal time is stored in cputime_t and this type can have a smaller granularity than nsecs, we only account the rounded converted value and leave the remaining nsecs for the next deltas. Reported-by: Huiqingding <huding@redhat.com> Reported-by: Marcelo Tosatti <mtosatti@redhat.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Marcelo Tosatti <mtosatti@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Acked-by: Rik van Riel <riel@redhat.com> Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
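A sketch of the corrected accounting, assuming the nsecs_to_cputime()/cputime_to_nsecs() helpers of that era:

    static __always_inline bool steal_account_process_tick(void)
    {
    #ifdef CONFIG_PARAVIRT
            if (static_key_false(&paravirt_steal_enabled)) {
                    u64 steal;
                    cputime_t steal_ct;

                    steal = paravirt_steal_clock(smp_processor_id());
                    steal -= this_rq()->prev_steal_time;

                    /* Convert the nsecs delta to cputime; account only the
                     * rounded value, keep the remainder for the next delta. */
                    steal_ct = nsecs_to_cputime(steal);
                    this_rq()->prev_steal_time += cputime_to_nsecs(steal_ct);

                    account_steal_time(steal_ct);
                    return steal_ct;
            }
    #endif
            return false;
    }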
* BACKPORT: FROMLIST: pids: make task_tgid_nr_ns() safeOleg Nesterov2017-08-313-15/+17
| | | | | | | | | | | | | | | | | | | | | | This was reported many times, and this was even mentioned in commit 52ee2dfdd4f5 "pids: refactor vnr/nr_ns helpers to make them safe" but somehow nobody bothered to fix the obvious problem: task_tgid_nr_ns() is not safe because task->group_leader points to nowhere after the exiting task passes exit_notify(); rcu_read_lock() cannot help. We really need to change __unhash_process() to nullify group_leader, parent, and real_parent, but this needs some cleanups. Until then we can turn task_tgid_nr_ns() into another user of __task_pid_nr_ns() and fix the problem. Reported-by: Troy Kensinger <tkensinger@google.com> Signed-off-by: Oleg Nesterov <oleg@redhat.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> (url: https://patchwork.kernel.org/patch/9913055/) Bug: 31495866 Change-Id: I5e67b02a77e805f71fa3a787249f13c1310f02e2
* mtk: gps: derpMister Oyster2017-08-311-2/+2
|
* vmstat: squash 4 patches from upstreamFrancisco Franco2017-08-311-3/+5
| | | | | | | | | Partial merge: https://github.com/torvalds/linux/commit/7c8e0181e6e0b8079c4c2ce902bf52d7a2c6fa5d Merge: https://github.com/torvalds/linux/commit/bb0b6dffa2ccfbd9747ad0cc87c7459622896e60 Merge: https://github.com/torvalds/linux/commit/57c2e36b6f4dd52e7e90f4c748a665b13fa228d2 Partial merge: https://github.com/torvalds/linux/commit/176bed1de5bf977938cad26551969eca8f0883b1 Signed-off-by: Francisco Franco <franciscofranco.1990@gmail.com>
* defconfig: enable ntfs driversMister Oyster2017-08-311-1/+3
|
* defconfig: enable exfat driversMister Oyster2017-08-311-0/+7
|
* fs: exfat: import latest dorimanx/exfat-nofuse driversMister Oyster2017-08-3129-0/+12655
|
* Apply CFLAGS, -Os to decompress.o to improve decompress performance during boot-up processflar22017-08-311-0/+3
| | | | | | Signed-off-by: flar2 <asegaert@gmail.com>
* compiler-intel.h: Remove duplicate definitionPranith Kumar2017-08-311-3/+0
| | | | | | | | | barrier is already defined as __memory_barrier in compiler.h. Remove this unnecessary redefinition. Signed-off-by: Pranith Kumar <bobby.prani@gmail.com> Link: http://lkml.kernel.org/r/CAJhHMCAnYPy0%2BqD-1KBnJPLt3XgAjdR12j%2BySSnPgmZcpbE7HQ@mail.gmail.com Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
* Make ARCH_HAS_FAST_MULTIPLIER a real config variableLinus Torvalds2017-08-315-6/+8
| | | | | | | | | | | | | | | | | | | | | It used to be an ad-hoc hack defined by the x86 version of <asm/bitops.h> that enabled a couple of library routines to know whether an integer multiply is faster than repeated shifts and additions. This just makes it use the real Kconfig system instead, and makes x86 (which was the only architecture that did this) select the option. NOTE! Even for x86, this really is kind of wrong. If we cared, we would probably not enable this for builds optimized for netburst (P4), where shifts-and-adds are generally faster than multiplies. This patch does *not* change that kind of logic, though, it is purely a syntactic change with no code changes. This was triggered by the fact that we have other places that really want to know "do I want to expand multiples by constants by hand or not", particularly the hash generation code. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* af_key: Add lock to key dumpYuejie Shi2017-08-311-7/+35
| | | | | | | | | | | | | | | | commit 89e357d83c06b6fac581c3ca7f0ee3ae7e67109e upstream. A dump may come in the middle of another dump, modifying its dump structure members. This race condition will result in NULL pointer dereference in kernel. So add a lock to prevent that race. Fixes: 83321d6b9872 ("[AF_KEY]: Dump SA/SP entries non-atomically") Signed-off-by: Yuejie Shi <syjcnss@gmail.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com> [bwh: Backported to 3.2: - pfkey_dump() doesn't support filters - Adjust context] Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
* sched/loadavg: Avoid loadavg spikes caused by delayed NO_HZ accountingMatt Fleming2017-08-311-2/+2
| If we crossed a sample window while in NO_HZ we will add LOAD_FREQ to the pending sample window time on exit, setting the next update not one window into the future, but two. This situation on exiting NO_HZ is described by:
|
|     this_rq->calc_load_update < jiffies < calc_load_update
|
| In this scenario, what we should be doing is:
|
|     this_rq->calc_load_update = calc_load_update              [ next window ]
|
| But what we actually do is:
|
|     this_rq->calc_load_update = calc_load_update + LOAD_FREQ  [ next+1 window ]
|
| This has the effect of delaying load average updates for potentially up to ~9 seconds. This can result in huge spikes in the load average values due to per-cpu uninterruptible task counts being out of sync when accumulated across all CPUs. It's safe to update the per-cpu active count if we wake between sample windows because any load that we left in 'calc_load_idle' will have been zero'd when the idle load was folded in calc_global_load(). This issue is easy to reproduce before commit 9d89c257dfb9 ("sched/fair: Rewrite runnable load and utilization average tracking") just by forking short-lived process pipelines built from ps(1) and grep(1) in a loop. I'm unable to reproduce the spikes after that commit, but the bug still seems to be present from code review. Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Mike Galbraith <umgwanakikbuti@gmail.com> Cc: Morten Rasmussen <morten.rasmussen@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vincent Guittot <vincent.guittot@linaro.org> Fixes: commit 5167e8d ("sched/nohz: Rewrite and fix load-avg computation -- again") Link: http://lkml.kernel.org/r/20170217120731.11868-2-matt@codeblueprint.co.uk Signed-off-by: Ingo Molnar <mingo@kernel.org> [@nathanchance: Adjust for 3.10] Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
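A sketch of the corrected exit path per the reasoning above (assuming the 3.10-era calc_load_exit_idle()):

    void calc_load_exit_idle(void)
    {
            struct rq *this_rq = this_rq();

            /*
             * If we're still before the pending sample window, we're done:
             * pick up the pending window rather than skipping past it.
             */
            this_rq->calc_load_update = calc_load_update;
            if (time_before(jiffies, this_rq->calc_load_update))
                    return;

            /*
             * We woke inside or after the sample window; we're already
             * accounted through the nohz accounting, so sync up for the
             * next window. The bug was unconditionally adding LOAD_FREQ
             * on top of a window that had not yet been reached.
             */
            if (time_before(jiffies, this_rq->calc_load_update + 10))
                    this_rq->calc_load_update += LOAD_FREQ;
    }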
* Revert "BACKPORT: ALSA: timer: Fix race between read and ioctl"Siqi Lin2017-08-311-4/+0
| | | | | | | | This reverts commit 59a480f39844a9a51ac143e8171580171b613b22. Bug: 62201221 Change-Id: Id79733dc0bd55ec55edb8b6b4884fd47711f9838 Signed-off-by: Siqi Lin <siqilin@google.com>
* sched/topology: Fix overlapping sched_group_maskPeter Zijlstra2017-08-311-1/+17
| commit 73bb059f9b8a00c5e1bf2f7ca83138c05d05e600 upstream. The point of sched_group_mask is to select those CPUs from sched_group_cpus that can actually arrive at this balance domain. The current code gets it wrong, as can be readily demonstrated with a topology like:
|
|     node   0   1   2   3
|       0:  10  20  30  20
|       1:  20  10  20  30
|       2:  30  20  10  20
|       3:  20  30  20  10
|
| Where (for example) domain 1 on CPU1 ends up with a mask that includes CPU0:
|
|     [] CPU1 attaching sched-domain:
|     []  domain 0: span 0-2 level NUMA
|     []   groups: 1 (mask: 1), 2, 0
|     []  domain 1: span 0-3 level NUMA
|     []   groups: 0-2 (mask: 0-2) (cpu_capacity: 3072), 0,2-3 (cpu_capacity: 3072)
|
| This causes sched_balance_cpu() to compute the wrong CPU and consequently should_we_balance() will terminate early resulting in missed load-balance opportunities. The fixed topology looks like:
|
|     [] CPU1 attaching sched-domain:
|     []  domain 0: span 0-2 level NUMA
|     []   groups: 1 (mask: 1), 2, 0
|     []  domain 1: span 0-3 level NUMA
|     []   groups: 0-2 (mask: 1) (cpu_capacity: 3072), 0,2-3 (cpu_capacity: 3072)
|
| (note: this relies on OVERLAP domains to always have children, this is true because the regular topology domains are still here -- this is before degenerate trimming) Debugged-by: Lauro Ramos Venancio <lvenanci@redhat.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Fixes: e3589f6c81e4 ("sched: Allow for overlapping sched_domain spans") Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* sched/topology: Optimize build_group_mask()Lauro Ramos Venancio2017-08-311-2/+2
| | | | | | | | | | | | | | | | | | | | commit f32d782e31bf079f600dcec126ed117b0577e85c upstream. The group mask is always used in intersection with the group CPUs. So, when building the group mask, we don't have to care about CPUs that are not part of the group. Signed-off-by: Lauro Ramos Venancio <lvenanci@redhat.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: lwang@redhat.com Cc: riel@redhat.com Link: http://lkml.kernel.org/r/1492717903-5195-2-git-send-email-lvenanci@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* vt: fix unchecked __put_user() in tioclinux ioctlsAdam Borowski2017-08-311-3/+3
| | | | | | | | | | | | | | | | Only read access is checked before this call. Actually, at the moment this is not an issue, as every in-tree arch does the same manual checks for VERIFY_READ vs VERIFY_WRITE, relying on the MMU to tell them apart, but this wasn't the case in the past and may happen again on some odd arch in the future. If anyone cares about 3.7 and earlier, this is a security hole (untested) on real 80386 CPUs. Signed-off-by: Adam Borowski <kilobyte@angband.pl> CC: stable@vger.kernel.org # v3.7- Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* exec: Limit arg stack to at most 75% of _STK_LIMKees Cook2017-08-311-5/+6
| | | | | | | | To avoid pathological stack usage or the need to special-case setuid execs, just limit all arg stack usage to at most 75% of _STK_LIM (6MB). Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
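The check itself is a short addition in fs/exec.c:get_arg_page(); sketched:

    /*
     * We've historically supported up to 32 pages (ARG_MAX)
     * of argument strings even with small stacks.
     */
    if (size <= ARG_MAX)
            return page;

    /*
     * Cap the argv+env strings at 75% of _STK_LIM (6MB) so a
     * pathological argument list always leaves stack room, with
     * no special-casing of setuid execs.
     */
    if (size > _STK_LIM / 4 * 3)
            goto fail;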
* ./Makefile: tell gcc optimizer to never introduce new data racesJiri Kosina2017-08-311-0/+4
| We have been chasing a memory corruption bug, which turned out to be caused by very old gcc (4.3.4), which happily turned a conditional load into a non-conditional one, and that broke correctness (the condition was met only if the lock was held) and corrupted memory. This particular problem with that particular code did not happen when newer gccs were used. I've brought this up with our gcc folks, as I wanted to make sure that this can't really happen again, and it turns out it actually can. Quoting Martin Jambor <mjambor@suse.cz>:
|
| "More current GCCs are more careful when it comes to replacing a conditional load with a non-conditional one, most notably they check that a store happens in each iteration of _a_ loop but they assume loops are executed. They also perform a simple check whether the store cannot trap which currently passes only for non-const variables. A simple testcase demonstrating it on an x86_64 is for example the following:
|
|     $ cat cond_store.c
|     int g_1 = 1;
|     int g_2[1024] __attribute__((section ("safe_section"), aligned (4096)));
|     int c = 4;
|
|     int __attribute__ ((noinline)) foo (void)
|     {
|       int l;
|       for (l = 0; (l != 4); l++) {
|         if (g_1)
|           return l;
|         for (g_2[0] = 0; (g_2[0] >= 26); ++g_2[0])
|           ;
|       }
|       return 2;
|     }
|
|     int main (int argc, char* argv[])
|     {
|       if (mprotect (g_2, sizeof(g_2), PROT_READ) == -1) {
|         int e = errno;
|         error (e, e, "mprotect error %i", e);
|       }
|       foo ();
|       __builtin_printf("OK\n");
|       return 0;
|     }
|     /* EOF */
|
|     $ ~/gcc/trunk/inst/bin/gcc cond_store.c -O2 --param allow-store-data-races=0
|     $ ./a.out
|     OK
|     $ ~/gcc/trunk/inst/bin/gcc cond_store.c -O2 --param allow-store-data-races=1
|     $ ./a.out
|     Segmentation fault
|
| The testcase fails the same at least with 4.9, 4.8 and 4.7. Therefore I would suggest building kernels with this parameter set to zero. I also agree with Jikos that the default should be changed for -O2. I have run most of the SPEC 2k6 CPU benchmarks (gamess and dealII failed, at -O2, not sure why) compiled with and without this option and did not see any real difference between respective run-times"
|
| Hopefully the default will be changed in newer gccs, but let's force it for kernel builds so that we are on a safe side even when older gccs are used. The code in question was the out-of-tree printk-in-NMI (yeah, surprise surprise, once again) patch written by Petr Mladek; let me quote his comment from our internal bugzilla:
|
| "I have spent a few days investigating an inconsistent state of the kernel ring buffer. It turned out that it was caused by a speculative store generated by gcc-4.3.4. The problem is in the assembly generated for make_free_space(). The function is called the following way:
|
|     + vprintk_emit();
|         + log = MAIN_LOG; // with logbuf_lock
|           or
|           log = NMI_LOG;  // with nmi_logbuf_lock
|           cont_add(log, ...);
|             + cont_flush(log, ...);
|                 + log_store(log, ...);
|                     + log_make_free_space(log, ...);
|
| If called with log = NMI_LOG then only nmi_log_* global variables are safe to modify but the generated code does store also into (main_)log_* global variables:
|
|     <log_make_free_space>:
|     55                       push   %rbp
|     89 f6                    mov    %esi,%esi
|     48 8b 05 03 99 51 01     mov    0x1519903(%rip),%rax   # ffffffff82620868 <nmi_log_next_id>
|     44 8b 1d ec 98 51 01     mov    0x15198ec(%rip),%r11d  # ffffffff82620858 <log_next_idx>
|     8b 35 36 60 14 01        mov    0x1146036(%rip),%esi   # ffffffff8224cfa8 <log_buf_len>
|     44 8b 35 33 60 14 01     mov    0x1146033(%rip),%r14d  # ffffffff8224cfac <nmi_log_buf_len>
|     4c 8b 2d d0 98 51 01     mov    0x15198d0(%rip),%r13   # ffffffff82620850 <log_next_seq>
|     4c 8b 25 11 61 14 01     mov    0x1146111(%rip),%r12   # ffffffff8224d098 <log_buf>
|     49 89 c2                 mov    %rax,%r10
|     48 21 c2                 and    %rax,%rdx
|     48 8b 1d 0c 99 55 01     mov    0x155990c(%rip),%rbx   # ffffffff826608a0 <nmi_log_buf>
|     49 c1 ea 20              shr    $0x20,%r10
|     48 89 55 d0              mov    %rdx,-0x30(%rbp)
|     44 29 de                 sub    %r11d,%esi
|     45 29 d6                 sub    %r10d,%r14d
|     4c 8b 0d 97 98 51 01     mov    0x1519897(%rip),%r9    # ffffffff82620840 <log_first_seq>
|     eb 7e                    jmp    ffffffff81107029 <log_make_free_space+0xe9>
|     [...]
|     85 ff                    test   %edi,%edi   # edi = 1 for NMI_LOG
|     4c 89 e8                 mov    %r13,%rax
|     4c 89 ca                 mov    %r9,%rdx
|     74 0a                    je     ffffffff8110703d <log_make_free_space+0xfd>
|     8b 15 27 98 51 01        mov    0x1519827(%rip),%edx   # ffffffff82620860 <nmi_log_first_id>
|     48 8b 45 d0              mov    -0x30(%rbp),%rax
|     48 39 c2                 cmp    %rax,%rdx   # end of loop
|     0f 84 da 00 00 00        je     ffffffff81107120 <log_make_free_space+0x1e0>
|     [...]
|     85 ff                    test   %edi,%edi   # edi = 1 for NMI_LOG
|     4c 89 0d 17 97 51 01     mov    %r9,0x1519717(%rip)    # ffffffff82620840 <log_first_seq>
|                                     ^^^^^^^^^^^^^^^^^^^^^^ KABOOOM
|     74 35                    je     ffffffff81107160 <log_make_free_space+0x220>
|
| It stores log_first_seq when edi == NMI_LOG. These instructions are used also when edi == MAIN_LOG, but the store is done speculatively before the condition is decided. It is unsafe because we do not have "logbuf_lock" in NMI context and some other process might modify "log_first_seq" in parallel"
|
| I believe that the best course of action is both:
|
|  - building the kernel (and anything multi-threaded, I guess) with that optimization turned off
|  - persuading gcc folks to change the default for future releases
|
| Signed-off-by: Jiri Kosina <jkosina@suse.cz> Cc: Martin Jambor <mjambor@suse.cz> Cc: Petr Mladek <pmladek@suse.cz> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Marek Polacek <polacek@redhat.com> Cc: Jakub Jelinek <jakub@redhat.com> Cc: Steven Noonan <steven@uplinklabs.net> Cc: Richard Biener <richard.guenther@gmail.com> Cc: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
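The change itself is a single cc-option-guarded flag in the top-level Makefile:

    # Tell gcc to never replace conditional load with a non-conditional one
    KBUILD_CFLAGS   += $(call cc-option,--param=allow-store-data-races=0)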
* alarmtimer: don't rate limit one-shot timersGreg Hackmann2017-08-311-1/+2
| | | | | | | | | | | | | | | Commit ff86bf0c65f1 ("alarmtimer: Rate limit periodic intervals") sets a minimum bound on the alarm timer interval. This minimum bound shouldn't be applied if the interval is 0. Otherwise, one-shot timers will be converted into periodic ones. Fixes: ff86bf0c65f1 ("alarmtimer: Rate limit periodic intervals") Reported-by: Ben Fennema <fennema@google.com> Signed-off-by: Greg Hackmann <ghackmann@google.com> Cc: stable@vger.kernel.org Cc: John Stultz <john.stultz@linaro.org> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* alarmtimer: Rate limit periodic intervalsThomas Gleixner2017-08-311-0/+8
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | commit ff86bf0c65f14346bf2440534f9ba5ac232c39a0 upstream. The alarmtimer code has another source of potentially rearming itself too fast. Interval timers with a very small interval have a similar CPU hog effect as the previously fixed overflow issue. The reason is that alarmtimers do not implement the normal protection against this kind of problem which the other posix timers use: timer expires -> queue signal -> deliver signal -> rearm timer This scheme brings the rearming under scheduler control and prevents permanently firing timers which hog the CPU. Bringing this scheme to the alarm timer code is a major overhaul because it lacks all the necessary mechanisms completely. So for a quick fix limit the interval to one jiffie. This is not problematic in practice as alarmtimers are usually backed by an RTC for suspend which has 1 second resolution. It could therefore be argued that the resolution of this clock should be set to 1 second in general, but that's outside the scope of this fix. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Kostya Serebryany <kcc@google.com> Cc: syzkaller <syzkaller@googlegroups.com> Cc: John Stultz <john.stultz@linaro.org> Cc: Dmitry Vyukov <dvyukov@google.com> Link: http://lkml.kernel.org/r/20170530211655.896767100@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
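Taken together with the one-shot fix above, the resulting clamp looks roughly like this (3.10-era ktime_t with .tv64 assumed):

    /*
     * Rate limit to the tick: clamp only genuinely periodic timers.
     * interval == 0 means one-shot and must stay one-shot.
     */
    if (timr->it.alarm.interval.tv64 &&
        ktime_to_ns(timr->it.alarm.interval) < TICK_NSEC)
            timr->it.alarm.interval = ktime_set(0, TICK_NSEC);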
* PM / Domains: Fix unsafe iteration over modified list of device linksKrzysztof Kozlowski2017-08-311-2/+2
| | | | | | | | | | | | | | commit c6e83cac3eda5f7dd32ee1453df2f7abb5c6cd46 upstream. pm_genpd_remove_subdomain() iterates over the domain's master_links list and removes the matching element, thus it has to use the safe version of list iteration. Fixes: f721889ff65a ("PM / Domains: Support for generic I/O PM domains (v8)") Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org> Acked-by: Ulf Hansson <ulf.hansson@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
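The fix is the classic switch to the _safe iterator, since the loop deletes the element it is visiting; sketched:

    struct gpd_link *link, *l;

    /* list_for_each_entry_safe() caches the next node up front, so
     * deleting 'link' inside the body cannot corrupt the walk. */
    list_for_each_entry_safe(link, l, &genpd->master_links, master_node) {
            if (link->slave != subdomain)
                    continue;
            /* ... unlink and free 'link' ... */
    }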
* lib/bsearch.c: micro-optimize pivot position calculationSergey Senozhatsky2017-08-301-10/+12
| There is a slightly faster way (in terms of the number of instructions being used) to calculate the position of a middle element, preserving integer overflow safeness.
|
|     ./scripts/bloat-o-meter lib/bsearch.o.old lib/bsearch.o.new
|     add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-24 (-24)
|     function    old    new   delta
|     bsearch     122     98     -24
|
| TEST: INT array of size 100001, elements [0..100000]. gcc 7.1, Os, x86_64.
|
| a) bsearch() of existing key "100001 - 2":
|
|     BASE
|     ====
|     $ perf stat ./a.out
|     Performance counter stats for './a.out':
|         619.445196      task-clock:u (msec)        #  0.999 CPUs utilized
|                  0      context-switches:u         #  0.000 K/sec
|                  0      cpu-migrations:u           #  0.000 K/sec
|                133      page-faults:u              #  0.215 K/sec
|      1,949,517,279      cycles:u                   #  3.147 GHz (83.06%)
|        181,017,938      stalled-cycles-frontend:u  #  9.29% frontend cycles idle (83.05%)
|         82,959,265      stalled-cycles-backend:u   #  4.26% backend cycles idle (67.02%)
|      4,355,706,383      instructions:u             #  2.23 insn per cycle
|                                                    #  0.04 stalled cycles per insn (83.54%)
|      1,051,539,242      branches:u                 #  1697.550 M/sec (83.54%)
|         15,263,381      branch-misses:u            #  1.45% of all branches (83.43%)
|        0.620082548 seconds time elapsed
|
|     PATCHED
|     =======
|     $ perf stat ./a.out
|     Performance counter stats for './a.out':
|         475.097316      task-clock:u (msec)        #  0.999 CPUs utilized
|                  0      context-switches:u         #  0.000 K/sec
|                  0      cpu-migrations:u           #  0.000 K/sec
|                135      page-faults:u              #  0.284 K/sec
|      1,487,467,717      cycles:u                   #  3.131 GHz (82.95%)
|        186,537,162      stalled-cycles-frontend:u  # 12.54% frontend cycles idle (82.93%)
|         28,797,869      stalled-cycles-backend:u   #  1.94% backend cycles idle (67.10%)
|      3,807,564,203      instructions:u             #  2.56 insn per cycle
|                                                    #  0.05 stalled cycles per insn (83.57%)
|      1,049,344,291      branches:u                 #  2208.693 M/sec (83.60%)
|              5,485      branch-misses:u            #  0.00% of all branches (83.58%)
|        0.475760235 seconds time elapsed
|
| b) bsearch() of un-existing key "100001 + 2":
|
|     BASE
|     ====
|     $ perf stat ./a.out
|     Performance counter stats for './a.out':
|         499.244480      task-clock:u (msec)        #  0.999 CPUs utilized
|                  0      context-switches:u         #  0.000 K/sec
|                  0      cpu-migrations:u           #  0.000 K/sec
|                132      page-faults:u              #  0.264 K/sec
|      1,571,194,855      cycles:u                   #  3.147 GHz (83.18%)
|         13,450,980      stalled-cycles-frontend:u  #  0.86% frontend cycles idle (83.18%)
|         21,256,072      stalled-cycles-backend:u   #  1.35% backend cycles idle (66.78%)
|      4,171,197,909      instructions:u             #  2.65 insn per cycle
|                                                    #  0.01 stalled cycles per insn (83.68%)
|      1,009,175,281      branches:u                 #  2021.405 M/sec (83.79%)
|              3,122      branch-misses:u            #  0.00% of all branches (83.37%)
|        0.499871144 seconds time elapsed
|
|     PATCHED
|     =======
|     $ perf stat ./a.out
|     Performance counter stats for './a.out':
|         399.023499      task-clock:u (msec)        #  0.998 CPUs utilized
|                  0      context-switches:u         #  0.000 K/sec
|                  0      cpu-migrations:u           #  0.000 K/sec
|                134      page-faults:u              #  0.336 K/sec
|      1,245,793,991      cycles:u                   #  3.122 GHz (83.39%)
|         11,529,273      stalled-cycles-frontend:u  #  0.93% frontend cycles idle (83.46%)
|         12,116,311      stalled-cycles-backend:u   #  0.97% backend cycles idle (66.92%)
|      3,679,710,005      instructions:u             #  2.95 insn per cycle
|                                                    #  0.00 stalled cycles per insn (83.47%)
|      1,009,792,625      branches:u                 #  2530.660 M/sec (83.46%)
|              2,590      branch-misses:u            #  0.00% of all branches (83.12%)
|        0.399733539 seconds time elapsed
|
| Link: http://lkml.kernel.org/r/20170607150457.5905-1-sergey.senozhatsky@gmail.com Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Cc: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Pranav Vashi <neobuddy89@gmail.com>
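The pivot calculation in question; the patched loop (matching upstream lib/bsearch.c):

    void *bsearch(const void *key, const void *base, size_t num, size_t size,
                  int (*cmp)(const void *key, const void *elt))
    {
            size_t start = 0, end = num;
            int result;

            while (start < end) {
                    /* Overflow-safe midpoint: never forms start + end,
                     * which could wrap for huge arrays. */
                    size_t mid = start + (end - start) / 2;

                    result = cmp(key, base + mid * size);
                    if (result < 0)
                            end = mid;
                    else if (result > 0)
                            start = mid + 1;
                    else
                            return (void *)base + mid * size;
            }

            return NULL;
    }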
* Revert "dm ioctl: prevent stack leak in dm ioctl call"Jonathan Solnit2017-08-301-1/+1
| | | | | | | | This reverts commit 1d5b6ba1bfe0ce28eca6fa79a74d0088e706e63e. Bug: 35644370 Change-Id: I0880d5f11cd22547934a13b7aa564a4102b95aa9 Signed-off-by: Jonathan Solnit <jsolnit@google.com>
* Revert "UPSTREAM: ALSA: timer: Fix missing queue indices reset at ↵Jonathan Solnit2017-08-301-1/+0
| | | | | | | | | | SNDRV_TIMER_IOCTL_SELECT" This reverts commit 31aaef072e5f8ce4fd12243be0695252bc736cd8. Bug: 62201221 Change-Id: Iff57c945dcffcdd5632aeb65992ae7a5aca98186 Signed-off-by: Jonathan Solnit <jsolnit@google.com>
* net: core: neighbour: Change the print format for addressesNaveen Ramaraj2017-08-241-2/+2
| | | | | | | | | | | | | | | | Print format %p displays the kernel address while bypassing the kptr_restrict sysctl settings. Change the print format for addresses from %p to %pK. If kptr_restrict is enabled, addresses are printed as zeroes. To view the actual addresses, disable kptr_restrict by - echo 0 > /proc/sys/kernel/kptr_restrict Bug: 37340687, 37341313 CRs-Fixed: 987041 Change-Id: I2eb33c63168ab26818dfdb3e11315f2ce8f24fa5 Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org> Signed-off-by: Naveen Ramaraj <nramaraj@codeaurora.org>
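The conversion pattern (call sites illustrative, not the exact neighbour.c lines):

    /* %p prints the raw kernel address regardless of kptr_restrict. */
    pr_debug("neigh %p is still alive\n", neigh);

    /* %pK honours kptr_restrict and prints zeroes when restricted. */
    pr_debug("neigh %pK is still alive\n", neigh);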
* UPSTREAM: crypto: crypto_memneq - add equality testing of memory regions w/o timing leaksJames Yonan2017-08-248-14/+174
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | (cherry pick from commit 5fd53819a37e243f368376d70260873448b83df8 in 3.10.y) commit 6bf37e5aa90f18baf5acf4874bca505dd667c37f upstream. When comparing MAC hashes, AEAD authentication tags, or other hash values in the context of authentication or integrity checking, it is important not to leak timing information to a potential attacker, i.e. when communication happens over a network. Bytewise memory comparisons (such as memcmp) are usually optimized so that they return a nonzero value as soon as a mismatch is found. E.g, on x86_64/i5 for 512 bytes this can be ~50 cyc for a full mismatch and up to ~850 cyc for a full match (cold). This early-return behavior can leak timing information as a side channel, allowing an attacker to iteratively guess the correct result. This patch adds a new method crypto_memneq ("memory not equal to each other") to the crypto API that compares memory areas of the same length in roughly "constant time" (cache misses could change the timing, but since they don't reveal information about the content of the strings being compared, they are effectively benign). Iow, best and worst case behaviour take the same amount of time to complete (in contrast to memcmp). Note that crypto_memneq (unlike memcmp) can only be used to test for equality or inequality, NOT for lexicographical order. This, however, is not an issue for its use-cases within the crypto API. We tried to locate all of the places in the crypto API where memcmp was being used for authentication or integrity checking, and convert them over to crypto_memneq. crypto_memneq is declared noinline, placed in its own source file, and compiled with optimizations that might increase code size disabled ("Os") because a smart compiler (or LTO) might notice that the return value is always compared against zero/nonzero, and might then reintroduce the same early-return optimization that we are trying to avoid. Using #pragma or __attribute__ optimization annotations of the code for disabling optimization was avoided as it seems to be considered broken or unmaintained for long time in GCC [1]. Therefore, we work around that by specifying the compile flag for memneq.o directly in the Makefile. We found that this seems to be most appropriate. As we use ("Os"), this patch also provides a loop-free "fast-path" for frequently used 16 byte digests. Similarly to kernel library string functions, leave an option for future even further optimized architecture specific assembler implementations. This was a joint work of James Yonan and Daniel Borkmann. Also thanks for feedback from Florian Weimer on this and earlier proposals [2]. [1] http://gcc.gnu.org/ml/gcc/2012-07/msg00211.html [2] https://lkml.org/lkml/2013/2/10/131 Signed-off-by: James Yonan <james@openvpn.net> Signed-off-by: Daniel Borkmann <dborkman@redhat.com> Cc: Florian Weimer <fw@deneb.enyo.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Cc: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: Willy Tarreau <w@1wt.eu> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
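A typical call-site conversion the message describes, e.g. an authentication tag check (the names calculated_tag/received_tag/authsize are illustrative):

    /* Before: memcmp() may return early at the first mismatching byte,
     * leaking how much of the tag matched via timing. */
    if (memcmp(calculated_tag, received_tag, authsize))
            return -EBADMSG;

    /* After: crypto_memneq() takes the same time for any input and only
     * answers equal/not-equal -- exactly what authentication needs. */
    if (crypto_memneq(calculated_tag, received_tag, authsize))
            return -EBADMSG;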
* mtk: gps: upstream changesMister Oyster2017-08-202-4/+46
|
* Merge mediatek security patchesfire8552017-08-206-30/+69
| | | | | | | * Revert : Merge mediatek security patches (14326e25d3fc3b4d780c2d9d2eebbe3231ad5376) * Reapply : 14326e25d3fc3b4d780c2d9d2eebbe3231ad5376 Signed-off-by: Mister Oyster <oysterized@gmail.com>
* xfrm: Don't use sk_family for socket policy lookupsSteffen Klassert2017-08-201-5/+4
| | | | | | | | | | | | | | On IPv4-mapped IPv6 addresses sk_family is AF_INET6, but the flow information is created based on AF_INET. So the routing sets up 'struct flowi4' but we try to access 'struct flowi6', which leads to an out-of-bounds access. Fix this by using the family we get with the dst_entry, like we do it for the standard policy lookup. Reported-by: Dmitry Vyukov <dvyukov@google.com> Tested-by: Dmitry Vyukov <dvyukov@google.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: Joe Maples <joe@frap129.org>
* net: skb_needs_check() accepts CHECKSUM_NONE for txEric Dumazet2017-08-201-3/+4
| | | | | | | | | | | | | | | My recent change missed the fact that UFO would perform a complete UDP checksum before segmenting in frags. In this case skb->ip_summed is set to CHECKSUM_NONE. We need to add this valid case to skb_needs_check(). Fixes: b2504a5dbef3 ("net: reduce skb_warn_bad_offload() noise") Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Willem de Bruijn <willemb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Joe Maples <joe@frap129.org>
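The fixed helper, per the upstream commit:

    static inline bool skb_needs_check(struct sk_buff *skb, bool tx_path)
    {
            /* On tx, UFO may already have computed a full checksum
             * (CHECKSUM_NONE), which is valid and needs no help. */
            if (tx_path)
                    return skb->ip_summed != CHECKSUM_PARTIAL &&
                           skb->ip_summed != CHECKSUM_NONE;

            return skb->ip_summed == CHECKSUM_NONE;
    }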
* net: sctp: fix race for one-to-many sockets in sendmsg's auto associateDaniel Borkmann2017-08-201-0/+8
| One-to-many sockets in SCTP are not required to explicitly call into connect(2) or sctp_connectx(2) prior to data exchange. Instead, they can directly invoke sendmsg(2) and the SCTP stack will automatically trigger connection establishment through 4WHS via sctp_primitive_ASSOCIATE(). However, this in its current implementation is racy: INIT is being sent out immediately (as it cannot be bundled anyway) and the rest of the DATA chunks are queued up for later xmit when connection is established, meaning sendmsg(2) will return successfully. This behaviour can result in an undesired side effect that the kernel made the application think the data has already been transmitted, although none of it has actually left the machine, worst case even after close(2)'ing the socket. Instead, when the association from the client side has been shut down, e.g. first gracefully through SCTP_EOF and then close(2), the client could afterwards still receive the server's INIT_ACK due to a connection with higher latency. This INIT_ACK is then considered out of the blue and hence responded to with ABORT as there was no alive assoc found anymore. This can be easily reproduced e.g. with the sctp_test application from lksctp. One way to fix this race is to wait for the handshake to actually complete. The fix defers waiting until after sctp_primitive_ASSOCIATE() and sctp_primitive_SEND() succeeded, so that DATA chunks cooked up from sctp_sendmsg() have already been placed into the output queue through the side-effect interpreter, and therefore can then be bundled together with COOKIE_ECHO control chunks. strace from example application (shortened):
|
|     socket(PF_INET, SOCK_SEQPACKET, IPPROTO_SCTP) = 3
|     sendmsg(3, {msg_name(28)={sa_family=AF_INET, sin_port=htons(8888), sin_addr=inet_addr("192.168.1.115")}, msg_iov(1)=[{"hello", 5}], msg_controllen=0, msg_flags=0}, 0) = 5
|     sendmsg(3, {msg_name(28)={sa_family=AF_INET, sin_port=htons(8888), sin_addr=inet_addr("192.168.1.115")}, msg_iov(1)=[{"hello", 5}], msg_controllen=0, msg_flags=0}, 0) = 5
|     sendmsg(3, {msg_name(28)={sa_family=AF_INET, sin_port=htons(8888), sin_addr=inet_addr("192.168.1.115")}, msg_iov(1)=[{"hello", 5}], msg_controllen=0, msg_flags=0}, 0) = 5
|     sendmsg(3, {msg_name(28)={sa_family=AF_INET, sin_port=htons(8888), sin_addr=inet_addr("192.168.1.115")}, msg_iov(1)=[{"hello", 5}], msg_controllen=0, msg_flags=0}, 0) = 5
|     sendmsg(3, {msg_name(28)={sa_family=AF_INET, sin_port=htons(8888), sin_addr=inet_addr("192.168.1.115")}, msg_iov(0)=[], msg_controllen=48, {cmsg_len=48, cmsg_level=0x84 /* SOL_??? */, cmsg_type=, ...}, msg_flags=0}, 0) = 0
|     // graceful shutdown for SOCK_SEQPACKET via SCTP_EOF
|     close(3) = 0
|
| tcpdump before patch (fooling the application):
|
|     22:33:36.306142 IP 192.168.1.114.41462 > 192.168.1.115.8888: sctp (1) [INIT] [init tag: 3879023686] [rwnd: 106496] [OS: 10] [MIS: 65535] [init TSN: 3139201684]
|     22:33:36.316619 IP 192.168.1.115.8888 > 192.168.1.114.41462: sctp (1) [INIT ACK] [init tag: 3345394793] [rwnd: 106496] [OS: 10] [MIS: 10] [init TSN: 3380109591]
|     22:33:36.317600 IP 192.168.1.114.41462 > 192.168.1.115.8888: sctp (1) [ABORT]
|
| tcpdump after patch:
|
|     14:28:58.884116 IP 192.168.1.114.35846 > 192.168.1.115.8888: sctp (1) [INIT] [init tag: 438593213] [rwnd: 106496] [OS: 10] [MIS: 65535] [init TSN: 3092969729]
|     14:28:58.888414 IP 192.168.1.115.8888 > 192.168.1.114.35846: sctp (1) [INIT ACK] [init tag: 381429855] [rwnd: 106496] [OS: 10] [MIS: 10] [init TSN: 2141904492]
|     14:28:58.888638 IP 192.168.1.114.35846 > 192.168.1.115.8888: sctp (1) [COOKIE ECHO], (2) [DATA] (B)(E) [TSN: 3092969729] [...]
|     14:28:58.893278 IP 192.168.1.115.8888 > 192.168.1.114.35846: sctp (1) [COOKIE ACK], (2) [SACK] [cum ack 3092969729] [a_rwnd 106491] [#gap acks 0] [#dup tsns 0]
|     14:28:58.893591 IP 192.168.1.114.35846 > 192.168.1.115.8888: sctp (1) [DATA] (B)(E) [TSN: 3092969730] [...]
|     14:28:59.096963 IP 192.168.1.115.8888 > 192.168.1.114.35846: sctp (1) [SACK] [cum ack 3092969730] [a_rwnd 106496] [#gap acks 0] [#dup tsns 0]
|     14:28:59.097086 IP 192.168.1.114.35846 > 192.168.1.115.8888: sctp (1) [DATA] (B)(E) [TSN: 3092969731] [...], (2) [DATA] (B)(E) [TSN: 3092969732] [...]
|     14:28:59.103218 IP 192.168.1.115.8888 > 192.168.1.114.35846: sctp (1) [SACK] [cum ack 3092969732] [a_rwnd 106486] [#gap acks 0] [#dup tsns 0]
|     14:28:59.103330 IP 192.168.1.114.35846 > 192.168.1.115.8888: sctp (1) [SHUTDOWN]
|     14:28:59.107793 IP 192.168.1.115.8888 > 192.168.1.114.35846: sctp (1) [SHUTDOWN ACK]
|     14:28:59.107890 IP 192.168.1.114.35846 > 192.168.1.115.8888: sctp (1) [SHUTDOWN COMPLETE]
|
| Looks like this bug is from the pre-git history museum. ;) Fixes: 08707d5482df ("lksctp-2_5_31-0_5_1.patch") Signed-off-by: Daniel Borkmann <dborkman@redhat.com> Acked-by: Vlad Yasevich <vyasevich@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Joe Maples <joe@frap129.org>
* pstore: Use dynamic spinlock initializerKees Cook2017-08-201-1/+1
| | | | | | | | | | | | The per-prz spinlock should be using the dynamic initializer so that lockdep can correctly track it. Without this, under lockdep, we get a warning at boot that the lock is in non-static memory. Fixes: 109704492ef6 ("pstore: Make spinlock per zone instead of global") Fixes: 76d5692a5803 ("pstore: Correctly initialize spinlock and flags") Signed-off-by: Kees Cook <keescook@chromium.org> Cc: stable@vger.kernel.org Signed-off-by: Joe Maples <joe@frap129.org>
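The essence of the fix, sketched (assuming the prz buffer_lock is a raw spinlock as in fs/pstore/ram_core.c):

    /* Before: assigning the static initializer to a kzalloc'd prz
     * bypasses lockdep's lock-class registration. */
    prz->buffer_lock = __RAW_SPIN_LOCK_UNLOCKED(buffer_lock);

    /* After: the dynamic initializer registers the lock class properly. */
    raw_spin_lock_init(&prz->buffer_lock);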
* pstore: Correctly initialize spinlock and flagsKees Cook2017-08-201-5/+7
| The ram backend wasn't always initializing its spinlock correctly. Since it was coming from kzalloc memory, though, it was harmless on architectures that initialize unlocked spinlocks to 0 (at least x86 and ARM). This also fixes a possibly ignored flag setting. When running under CONFIG_DEBUG_SPINLOCK, the following Oops was visible:
|
|     [    0.760836] persistent_ram: found existing buffer, size 29988, start 29988
|     [    0.765112] persistent_ram: found existing buffer, size 30105, start 30105
|     [    0.769435] persistent_ram: found existing buffer, size 118542, start 118542
|     [    0.785960] persistent_ram: found existing buffer, size 0, start 0
|     [    0.786098] persistent_ram: found existing buffer, size 0, start 0
|     [    0.786131] pstore: using zlib compression
|     [    0.790716] BUG: spinlock bad magic on CPU#0, swapper/0/1
|     [    0.790729]  lock: 0xffffffc0d1ca9bb0, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0
|     [    0.790742] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.10.0-rc2+ #913
|     [    0.790747] Hardware name: Google Kevin (DT)
|     [    0.790750] Call trace:
|     [    0.790768] [<ffffff900808ae88>] dump_backtrace+0x0/0x2bc
|     [    0.790780] [<ffffff900808b164>] show_stack+0x20/0x28
|     [    0.790794] [<ffffff9008460ee0>] dump_stack+0xa4/0xcc
|     [    0.790809] [<ffffff9008113cfc>] spin_dump+0xe0/0xf0
|     [    0.790821] [<ffffff9008113d3c>] spin_bug+0x30/0x3c
|     [    0.790834] [<ffffff9008113e28>] do_raw_spin_lock+0x50/0x1b8
|     [    0.790846] [<ffffff9008a2d2ec>] _raw_spin_lock_irqsave+0x54/0x6c
|     [    0.790862] [<ffffff90083ac3b4>] buffer_size_add+0x48/0xcc
|     [    0.790875] [<ffffff90083acb34>] persistent_ram_write+0x60/0x11c
|     [    0.790888] [<ffffff90083aab1c>] ramoops_pstore_write_buf+0xd4/0x2a4
|     [    0.790900] [<ffffff90083a9d3c>] pstore_console_write+0xf0/0x134
|     [    0.790912] [<ffffff900811c304>] console_unlock+0x48c/0x5e8
|     [    0.790923] [<ffffff900811da18>] register_console+0x3b0/0x4d4
|     [    0.790935] [<ffffff90083aa7d0>] pstore_register+0x1a8/0x234
|     [    0.790947] [<ffffff90083ac250>] ramoops_probe+0x6b8/0x7d4
|     [    0.790961] [<ffffff90085ca548>] platform_drv_probe+0x7c/0xd0
|     [    0.790972] [<ffffff90085c76ac>] driver_probe_device+0x1b4/0x3bc
|     [    0.790982] [<ffffff90085c7ac8>] __device_attach_driver+0xc8/0xf4
|     [    0.790996] [<ffffff90085c4bfc>] bus_for_each_drv+0xb4/0xe4
|     [    0.791006] [<ffffff90085c7414>] __device_attach+0xd0/0x158
|     [    0.791016] [<ffffff90085c7b18>] device_initial_probe+0x24/0x30
|     [    0.791026] [<ffffff90085c648c>] bus_probe_device+0x50/0xe4
|     [    0.791038] [<ffffff90085c35b8>] device_add+0x3a4/0x76c
|     [    0.791051] [<ffffff90087d0e84>] of_device_add+0x74/0x84
|     [    0.791062] [<ffffff90087d19b8>] of_platform_device_create_pdata+0xc0/0x100
|     [    0.791073] [<ffffff90087d1a2c>] of_platform_device_create+0x34/0x40
|     [    0.791086] [<ffffff900903c910>] of_platform_default_populate_init+0x58/0x78
|     [    0.791097] [<ffffff90080831fc>] do_one_initcall+0x88/0x160
|     [    0.791109] [<ffffff90090010ac>] kernel_init_freeable+0x264/0x31c
|     [    0.791123] [<ffffff9008a25bd0>] kernel_init+0x18/0x11c
|     [    0.791133] [<ffffff9008082ec0>] ret_from_fork+0x10/0x50
|     [    0.793717] console [pstore-1] enabled
|     [    0.797845] pstore: Registered ramoops as persistent store backend
|     [    0.804647] ramoops: attached 0x100000@0xf7edc000, ecc: 0/0
|
| Fixes: 663deb47880f ("pstore: Allow prz to control need for locking") Fixes: 109704492ef6 ("pstore: Make spinlock per zone instead of global") Reported-by: Brian Norris <briannorris@chromium.org> Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Joe Maples <joe@frap129.org>