Psychosomatic, Lobotomy, Saw

Java Object Layout: A Tale Of Confusion

Following the twists and turns of the conversation on this thread in the Mechanical Sympathy mailing list highlights how hard it is to reason about object layout based on remembered rules. Nearly everyone on the thread is right, but not completely right. Here's how it went...

Context

The thread discusses False Sharing as described here. It is pointed out that the padding is one-sided, and padding using inheritance is demonstrated as a solution. The merits of using inheritance vs. using an array and Unsafe to access the middle element (see the Disruptor's Sequence) vs. using AtomicLongArray to achieve the same effect are discussed (I think the inheritance option is best, as explored here). And then confusion erupts...
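For reference, the inheritance-based padding pattern under discussion (in the style of the Disruptor's Sequence) looks roughly like this; class and field names here are illustrative, not taken from the thread:

abstract class LhsPadding { protected long p01, p02, p03, p04, p05, p06, p07; }
abstract class Value extends LhsPadding { protected volatile long value; }
abstract class RhsPadding extends Value { protected long p11, p12, p13, p14, p15, p16, p17; }
// HotSpot lays out superclass fields before subclass fields, so 'value' ends up
// with 56 bytes of padding on either side and cannot share a cache line with a
// hot field of a neighbouring object.
final class PaddedValue extends RhsPadding { }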

What's my layout?

At this point Peter L. makes the following point:
[...] in fact the best option may be:
class Client1 {
    private long value;
    public long[] padding = new long[5];
}
What follows is a series of suggestions on what the layout of this class may be.

Option 1: My recollections

I was too lazy to check and from memory penned the following:
[...] the ordering of the fields may end up shifting the reference (if it's 32bit or CompressedOop) next to the header to fill the pad required for the value field. The end layout may well be:
12b header
4b padding(oop)
8b value

Option 2: Simon's recollections

Simon B. replied:
[...] I thought Hotspot was laying out longs before
references, and that the object header was 8 Bytes.
So I would expect Client1 to be laid out in this way:
8B header
8B value
4B padding (oop)
[...] Am I out of date in object layout and header size ?

Option 3: TM Jee's doubts

Mr. Jee slightly changed the class and continued:
for:
class Client1 {
    private long value;
    public long[] padding = new long[5];
    public Object[] o = new Object[1];
}
the memory layout should be something like
12b header (or is it 16b)
8b value
4b for the long[] (its just the reference which is 4b for compressed and 8b if not)
4b for the Object[] (again it's just the reference)
Is this right so far?
To which Peter L. wisely replied:
Yes. But as you recognise the sizes of the header and sizes of references are not known until runtime.

Option 4: Check...

So I used JOL to check. And as it turns out we are all somewhat right and somewhat wrong...
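A JOL snippet producing this kind of printout is along these lines (a sketch; the exact API may differ slightly between JOL versions, and it assumes the Client1 class from above is on the classpath):

import org.openjdk.jol.info.ClassLayout;
import org.openjdk.jol.vm.VM;

public class Client1LayoutCheck {
  public static void main(String[] args) {
    // Prints the VM mode and reference size (e.g. compressed references with 3-bit shift)
    System.out.println(VM.current().details());
    // Prints the field offsets, sizes and alignment losses for Client1
    System.out.println(ClassLayout.parseClass(Client1.class).toPrintable());
  }
}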
I'm right for compressed oops (the default for 64bit):
Running 64-bit HotSpot VM.
Using compressed references with 3-bit shift.
Client1 object internals:
 OFFSET  SIZE     TYPE DESCRIPTION
      0     4          (object header)
      4     4          (object header)
      8     4          (object header)
     12     4   long[] Client1.padding
     16     8     long Client1.value
     24     4 Object[] Client1.o
     28     4          (loss due to the next object alignment)

The header is 12b and the array reference is shifted up to save on space. But my casual assumption that the 32-bit JVM layout would be the same is wrong.

Simon is right that the header is 8b (but only for 32bit JVMs) and that references will go at the end (for both 32bit and 64bit, but not with compressed oops):
Running 32-bit HotSpot VM.
Client1 object internals:
 OFFSET  SIZE     TYPE DESCRIPTION
      0     4          (object header)
      4     4          (object header)
      8     8     long Client1.value
     16     4   long[] Client1.padding
     20     4 Object[] Client1.o

And finally, on 64bit with compressed oops disabled, Mr. Jee is right too:
Running 64-bit HotSpot VM.
Client1 object internals:
 OFFSET  SIZE     TYPE DESCRIPTION
      0     4          (object header)
      4     4          (object header)
      8     4          (object header)
     12     4          (object header)
     16     8     long Client1.value
     24     8   long[] Client1.padding
     32     8 Object[] Client1.o

And Peter is entirely right to point out that the runtime is the crucial variable in this equation.

Lesson?

If you catch yourself wondering about object layout:
  1. Use JOL to check; it's better than memorizing rules
  2. Remember that 32-bit, 64-bit and 64-bit+CompressedOops layouts differ on HotSpot, and other JVMs may have different layouts altogether
  3. Read another post about Java memory layout


    An extended Queue interface

    In my work on JCTools I have implemented a fair number of concurrent access queues. The Queue interface is part of the java.util package and offers a larger API surface than I consider core to concurrent message passing, while still missing some methods. I'm hoping to solicit some discussion on some new methods, and see if I can be convinced to implement those I have decided to avoid thus far.

    Inter-thread Message Passing vs. Generic Collection

    The queue data structure traditionally has an offer/push/enqueue method for writing into the queue and a poll/pop/dequeue method for removing elements. This should be enough for anyone, right?
    Turns out some other useful methods come to mind when using a queue:
    • peek: looking, not touching
    • size: for monitoring and heuristics and so on
    • isEmpty: a potentially more efficient assertion than size == 0
    But I would argue that implementing the full scope of the java.util.Collection methods is slightly redundant. Why is it good to have offer/poll also exposed as add/remove/element?
    I have chosen to avoid implementing iterators for the queues in JCTools; they introduce a lot of complexity and I'm not convinced users out there need an iterator for their concurrent queue. The duplicate collection methods I get for free from extending AbstractQueue are alright, though I don't see why anyone would actually use them.
    So there are lots of methods I don't want, but also some methods missing, given concurrent queue semantics. I have come to the conclusion that the following are both practical to implement and helpful to users (a sketch of such an interface follows the list):
    • weakOffer/Poll/Peek: the same as the 'strong' methods, but allowing for a spurious negative return value (i.e. the method may fail even though the queue is not full/empty). This is particularly useful for queues where we are able to detect the inability to claim the next slot independently of the queue being full or empty. The 'weak' prefix is different from the Atomic* classes' definition of 'weak' as it has no ordering implications, but I've been unable to come up with a better name (suggestions are most welcome).
    • drain(receiver)/fill(provider): this is attractive for cases where a receiver/provider function exists such that calls to it are guaranteed to succeed (e.g. the receiver could just throw elements away, the provider might return a constant). There are some small savings to be made here around the function-calling loop, but for some these small savings might prove significant. A perpetual version of these methods could also take some sort of backoff strategy and spin when hitting an empty/full condition (similar to the Disruptor's BatchProcessor workflow).
    • batchPoll/Offer: this is attractive for processing a known number of elements out of/into a queue. It is particularly profitable when multiple consumers/producers contend, as the granularity of contention is reduced. It is also potentially damaging, as batches can introduce large 'bubbles' into the queue and increase unfairness. These methods could support a receiver/provider interface or use an array (which enables batch array copies and guarantees quick release of a 'bubble').
    • offeredElementsCount/polledElementsCount: this is purely for reporting and would tend to be a long reflecting the producer/consumer counter. For linked queues this adds a new overhead, so while this is a clearly useful feature I'm reluctant to make it a compulsory part of the new interface.
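    A rough sketch of what such an extension interface could look like (names and signatures here are illustrative proposals, not the final JCTools API):

// Illustrative sketch only; names and signatures are proposals, not a final API.
interface MessagePassingQueueSketch<T> {
    interface Consumer<T> { void accept(T e); }
    interface Supplier<T> { T get(); }

    boolean weakOffer(T e); // may fail spuriously, i.e. even when the queue is not full
    T weakPoll();           // may return null even when the queue is not empty
    T weakPeek();           // may return null even when the queue is not empty

    int drain(Consumer<T> receiver);  // hand all currently available elements to the receiver
    int fill(Supplier<T> provider);   // fill all currently available capacity from the provider

    int batchPoll(Consumer<T> receiver, int batch);  // consume up to 'batch' elements
    int batchOffer(Supplier<T> provider, int batch); // produce up to 'batch' elements

    long polledElementsCount();  // consumer counter, for reporting
    long offeredElementsCount(); // producer counter, for reporting
}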

    I plan to implement the above for JCTools in the near future. What do you think? Please opine in the comments or via the project's suggestion box.
    Thanks :)

    Expanding The Queue interface: Relaxed Queue Access


    Continuing from the previous post on expanding the Queue interface to support new ways of interacting with queues, I have gone ahead and implemented relaxedOffer/Poll/Peek for the JCTools queues. This was pretty easy, as the original algorithms all required a bit of mangling to support the strong semantics, and relaxing them made life easier. Implementing batch/continuous interactions was a bit more involved, but the results were interestingly rewarding. I will save discussing the drain/fill results for a follow-up post.
    The new interface is here, feedback is welcome.
    TL;DR:

    • Throughput is improved in all cases (~5-15% increase)
    • Cost of failure (when offer/poll fail to return/add a value) is dramatically reduced
    • Latency for bursts is improved (as much as 40% reduction)

    Relaxed Access API

    This ended up a very small API change, with most of the difficulty being the choice of name. I went with the 'relaxed' prefix as suggested by J.P. Bempel (as opposed to the 'weak' prefix considered earlier):
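    The gist embedded in the original post is not reproduced here; the addition boils down to three methods along these lines (a sketch, see the linked interface for the real definitions):

// Sketch of the added methods; see the linked JCTools interface for the actual definitions.
interface RelaxedAccess<T> {
    boolean relaxedOffer(T e); // like offer(), but may spuriously fail when the queue is not full
    T relaxedPoll();           // like poll(), but may spuriously return null when the queue is not empty
    T relaxedPeek();           // like peek(), but may spuriously return null when the queue is not empty
}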

    Relaxed vs. Non-Relaxed Example

    The 'relaxation' of the queue full/empty reporting in these methods allows us to remove an extra load of the 'opposite' index and avoid waiting around for delayed visibility. Here's how MPSCArrayQueue::poll was changed to become 'relaxed':
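    The before/after code is embedded in the original post; a simplified sketch of the difference (not the actual JCTools source) is:

import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.AtomicReferenceArray;

// Simplified sketch of an MPSC array queue poll, NOT the actual JCTools code.
class MpscPollSketch<E> {
  final AtomicReferenceArray<E> buffer;
  final AtomicLong producerIndex = new AtomicLong(); // bumped by producers on offer
  final AtomicLong consumerIndex = new AtomicLong(); // written only by the single consumer
  final int mask;

  MpscPollSketch(int capacity) { // capacity assumed to be a power of 2
    buffer = new AtomicReferenceArray<>(capacity);
    mask = capacity - 1;
  }

  // 'Strong' poll: a null slot may mean the queue is empty OR that a producer has
  // claimed the slot but not yet written the element, so we must load the
  // producerIndex and wait the element out before we may return null.
  public E poll() {
    long cIndex = consumerIndex.get();
    int offset = (int) (cIndex & mask);
    E e = buffer.get(offset);
    if (e == null) {
      if (cIndex == producerIndex.get()) {
        return null;               // truly empty
      }
      do {                         // an offer is in progress, spin for visibility
        e = buffer.get(offset);
      } while (e == null);
    }
    buffer.lazySet(offset, null);
    consumerIndex.lazySet(cIndex + 1);
    return e;
  }

  // 'Relaxed' poll: skip the producerIndex load, just report failure on a null slot.
  public E relaxedPoll() {
    long cIndex = consumerIndex.get();
    int offset = (int) (cIndex & mask);
    E e = buffer.get(offset);
    if (e == null) {
      return null;                 // empty OR an offer is in progress, we don't care
    }
    buffer.lazySet(offset, null);
    consumerIndex.lazySet(cIndex + 1);
    return e;
  }
}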

    So the relaxed version simply drops the producerIndex check and the wait-for-visibility loop; it doesn't look like much. We wouldn't really expect this gap in offer (see offer for details) to happen very often, so the overhead typically amounts to an extra load of the producerIndex and a predictable branch.
    Also, since poll on an MPSC is actually cheaper than the offer anyhow, aren't we optimizing a non-issue? Could we not be actually making throughput worse by making the consumer more aggressive?

    Throughput Impact Of Relaxation

    To test the impact of this change I constructed a JMH benchmark that measures throughput when both consumer and producer spin to offer/poll as fast as they can (see the code here). I ran the benchmark on a Haswell machine (i7-4770 CPU@3.40GHz/Ubuntu/OracleJVM8u60). The benchmarks were pinned to run cross-core (no 2 threads share a physical core, i.e. no hyperthreading). Benchmarks were run with 5 forks, 10 warmup iterations (1 second each), 10 measured iterations (1 second each). The score reports operations per microsecond, or millions of operations per second if you prefer. The throughput is the number of pollsMade, but the other figures are interesting as well.
    These results are with a single producer and consumer:
     Type |     1P1C     | Normal± Err  | Relax ± Err  |
     Spsc | offersFailed |   0.0 ±  0.1 |   0.1 ±  0.1 |
     Spsc | offersMade   | 439.1 ±  4.1 | 432.1 ± 10.0 |
     Spsc | pollsFailed  |   0.2 ±  0.5 |   0.1 ±  0.2 |
     Spsc | pollsMade    | 438.9 ±  4.1 | 431.9 ± 10.0 |
     Mpsc | offersFailed |   0.0 ±  0.0 |   0.0 ±  0.0 |
     Mpsc | offersMade   |  22.7 ±  0.6 |  31.9 ±  0.8 |
     Mpsc | pollsFailed  |   0.1 ±  0.1 |   6.6 ±  0.9 |
     Mpsc | pollsMade    |  22.8 ±  0.6 |  31.9 ±  0.8 |
     Spmc | offersFailed |   0.2 ±  0.2 |  81.0 ±  5.4 |
     Spmc | offersMade   |  23.5 ±  0.6 |  26.8 ±  0.6 |
     Spmc | pollsFailed  |   0.1 ±  0.1 |   0.2 ±  0.3 |
     Spmc | pollsMade    |  23.2 ±  0.6 |  26.6 ±  0.6 |
     Mpmc | offersFailed |   0.0 ±  0.0 |   0.0 ±  0.1 |
     Mpmc | offersMade   |  71.4 ±  5.1 |  71.7 ±  4.8 |
     Mpmc | pollsFailed  |   0.0 ±  0.1 |   0.3 ±  0.9 |
     Mpmc | pollsMade    |  71.4 ±  5.1 |  71.7 ±  4.8 |

    • The SPSC case serves as a baseline, and I expected results to be the same. There's a 2% difference which is close to the error reported. Maybe the inlining depth impacted the performance slightly, or perhaps the benchmark is not as stable as I'd like.
    • The MPSC case shows a ~40% improvement. This is pretty awesome. We also notice the consumer now fails more often. This is indicative of the reduced cost of failing to find elements in the queue. The cache miss on the producerIndex was slowing down both the consumer and the producer. This phenomenon has been observed in other posts, but my best attempt at an explanation is here. The bottom line is that the mutator suffers from losing the exclusive state of the cache line in its own cache, while the reader gets hit with a read miss (going to LLC as the threads are on separate cores).
    • The SPMC case shows 15% improvement on throughput. The cost of failing to offer has gone down dramatically.
    • The MPMC case shows no change, but does show great throughput. This could be down to the very different algorithm in play or just because the MPMC queue is well balanced and thus behaves very well in a balanced use case as above. This may lead you to believe MPMC is a better choice than MPSC, but as the latency results and contended throughput results show this is an anomaly of the benchmark load. Because MPSC presents a much cheaper poll than offer the queue is always empty, making the contention on the queue buffer worse than it would be for MPMC where offer and poll are of similar costs.
    Using the command line controls over thread groups sizes (-tg) I ran a few more cases. Did I mention I love JMH?
    Here's MPMC and MPSC with 3 producers and 1 consumer:
     Type |     3P1C     | Normal± Err  | Relax ± Err  |
     Mpsc | offersFailed |   0.0 ±  0.0 |   0.1 ±  0.6 | 
     Mpsc | offersMade   |  12.7 ±  0.1 |  13.4 ±  0.1 | 
     Mpsc | pollsFailed  |   4.1 ±  0.5 | 115.3 ±  4.9 | 
     Mpsc | pollsMade    |  12.7 ±  0.1 |  13.4 ±  0.1 | 
     Mpmc | offersFailed |   0.0 ±  0.0 |   0.0 ±  0.0 | 
     Mpmc | offersMade   |  11.1 ±  0.0 |  11.7 ±  0.1 | 
     Mpmc | pollsFailed  |   0.3 ±  0.1 |  60.4 ± 10.0 | 
     Mpmc | pollsMade    |  11.1 ±  0.0 |  11.7 ±  0.1 |
    • MPSC case shows slight improvement to throughput of roughly 5%, but the improvement to cost of failure is very visible here.
    • MPMC case improves by 5% and the improved cost of failure is similarly visible.
    Here's MPMC and SPMC with 1 producer and 3 consumers:
     Type |     1P3C     | Normal± Err  | Relax ± Err  |
     Spmc | offersFailed |   1.5 ±  0.2 | 125.9 ±  4.6 |
     Spmc | offersMade   |  13.4 ±  0.1 |  14.0 ±  0.0 |
     Spmc | pollsFailed  |   0.0 ±  0.0 |   0.0 ±  0.0 |
     Spmc | pollsMade    |  13.1 ±  0.1 |  13.7 ±  0.0 |
     Mpmc | offersFailed |  23.9 ±  0.8 | 124.1 ±  3.2 |
     Mpmc | offersMade   |  11.0 ±  0.0 |  12.1 ±  0.0 |
     Mpmc | pollsFailed  |   0.0 ±  0.2 |   1.0 ±  1.6 |
     Mpmc | pollsMade    |  10.7 ±  0.0 |  11.9 ±  0.0 |
    • SPMC case shows  5% improvement and great improvement to failure performance.
    • MPMC case improves by 11% and the improved cost of failure is similarly visible.
    Here's MPMC only, with 2 producers and 2 consumers:
     Type |    2P2C      | Normal± Err  | Relax ± Err  |
     Mpmc | offersFailed |   1.4 ±  0.8 |   0.7 ±  0.6 |
     Mpmc | offersMade   |  14.8 ±  0.8 |  16.0 ±  0.6 |
     Mpmc | pollsFailed  |   0.0 ±  0.0 |   0.4 ±  0.5 |
     Mpmc | pollsMade    |  14.7 ±  0.8 |  15.9 ±  0.6 |
    • The improvement to throughput is a nice 8%, but the error is not that far from it so I'm not too confident without lots more runs. This is a similarly symmetric load to the 1P1C and the balance is showing through.
    I could go on to higher core counts etc, but I'm actually feeling the win here is pretty conclusive:
    • Throughput is generally improved
    • Failure to make progress is significantly cheaper
    Why is this failure mode important? Because you may have a consumer/producer which is able to make progress in the face of such a failure. A common example of this pattern is a Selector thread which also has an incoming queue of commands to execute: when there's nothing in the queue we would like to go back to selecting on the selector. A sketch of this pattern follows.
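    Here is an illustrative sketch (not taken from any real project); commandQueue could be e.g. a JCTools MPSC queue, and relaxedPoll() would replace poll() once available:

import java.io.IOException;
import java.nio.channels.Selector;
import java.util.Queue;

// Illustrative sketch: an event loop draining a command queue before going back to select().
// A cheap "queue is empty" answer is exactly what the relaxed methods provide.
class SelectorLoopSketch implements Runnable {
    private final Selector selector;
    private final Queue<Runnable> commandQueue; // e.g. a JCTools MPSC queue
    private volatile boolean running = true;

    SelectorLoopSketch(Selector selector, Queue<Runnable> commandQueue) {
        this.selector = selector;
        this.commandQueue = commandQueue;
    }

    @Override
    public void run() {
        try {
            while (running) {
                Runnable cmd;
                // relaxedPoll() once available; failure here is common and should be cheap
                while ((cmd = commandQueue.poll()) != null) {
                    cmd.run();
                }
                selector.select(10); // nothing queued, go back to waiting on IO
                // ... process selected keys ...
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}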


    Latency impact of Relaxation

    Is this going to help us in the case of burst handling latency? There's a benchmark for that too! Or rather there's an estimate of burst processing cost. Code is here. The general idea being to send a burst of messages from the producer, then wait for completion notice from the consumer.
    Here goes:
     Size | Type | Normal±Err | Relax± Err |
        1 | Spsc |  126 ±   2 |  126 ±   1 |
        1 | Mpsc |  144 ±   2 |  140 ±   1 |
        1 | Spmc |  187 ±   2 |  189 ±   2 |
        1 | Mpmc |  190 ±   8 |  203 ±   2 |
       10 | Spsc |  193 ±   2 |  192 ±   3 |
       10 | Mpsc |  611 ±  22 |  429 ±   7 |
       10 | Spmc |  309 ±   3 |  307 ±   1 |
       10 | Mpmc |  807 ±  19 |  677 ±  15 |
      100 | Spsc |  530 ±   7 |  528 ±   4 |
      100 | Mpsc | 5243 ± 158 | 3023 ± 107 |
      100 | Spmc | 1240 ±   7 | 1250 ±   6 |
      100 | Mpmc | 4684 ± 382 | 3665 ± 111 |

    • Same as before, SPSC serves as a sanity check/baseline. No change is good.
    • SPMC/MPSC results are pretty similar when sending a single message. MPMC seems slightly regressed.
    • When sending 10/100 messages in a burst we see significant benefits to using the relaxed methods, in particular for the MPSC and MPMC cases.


    Summary

    Relaxing is good for you, deep down you always knew it. Turns out it ain't half bad for your queues either :-). The new API will be part of the next version of JCTools, coming out as soon as I can work through my Maven mojo phobia.

    Thanks J.P. Bempel and Phil Aston for their kind review, remaining errors are a result of me refusing to listen to them persnickety bastards. 

    Safepoints: Meaning, Side Effects and Overheads

    I've been giving a couple of talks in the last year about profiling and about the JVM runtime/execution, and in both I found myself coming up against the topic of safepoints. Most people are blissfully ignorant of the existence of safepoints, and in a room full of people I would typically find 1 or 2 developers with any familiarity with the term. This is not surprising, since safepoints are not a part of the Java language spec. But as safepoints are a part of every JVM implementation (that I know of) and play an important role, here's my feeble attempt at an overview.

    What's a Safepoint?

    I was recently asked: "Are safepoints like safe words?". The short answer is "not really", but since I like the metaphor I'll run with it:
    Imagine if you will a JVM full of mutator threads, all busy, sweating, mutating the heap. Some of them have <gasp> shared mutable state. They're mutating each others state, concurrently, like animals. Some stand in corners mutating their own state (go blind they will). Suddenly a neon sign flashes the word PINEAPPLES. One by one the mutators stop their rampant heap romping and wait, sweat dripping. When the last mutator stops, a bunch of elves come in, empty the ashtrays, fill up all the drinks, mop up the puddles, and quickly as they can they vanish back to the north pole. The sign is turned off and the threads go back to it....
    <inner-dialogue>Too much? It's like 35c in the shade here, so sweating is like... no? never mind </inner-dialogue>
    There are many references to Safepoints to be found online and what follows here is my attempt at a more nuanced use/definition, no sweating mutators from this point onwards.
    A safepoint is a range of execution where the state of the executing thread is well described. Mutator threads are threads which manipulate the JVM heap (all your Java Threads are mutators. Non-Java threads may also be regarded as mutators when they call into JVM APIs which interact with the heap).
    At a safepoint the mutator thread is at a known and well defined point in its interaction with the heap. This means that all the references on the stack are mapped (at known locations) and the JVM can account for all of them. As long as the thread remains at a safepoint we can safely manipulate the heap + stack such that the thread's view of the world remains consistent when it leaves the safepoint.
    This is particularly useful when the JVM wishes to examine or change the heap, for a GC or any one of many other reasons. If references on the stack were unaccounted for and the JVM were to run a GC, it might miss the fact that some objects are still alive (referenced from the stack) and collect them, or it might move some object without updating its reference on the stack, leading to memory corruption.
    A JVM will therefore need a means of bringing threads to safepoints (and keeping them there) so that all sorts of runtime magic can happen. Here's a partial list of activities which JVMs run only once all mutator threads are at a safepoint and cannot leave it until released (i.e. at a global safepoint); these are sometimes called safepoint operations:
    • Some GC phases (the Stop The World kind)
    • JVMTI stack sampling methods (not always a global safepoint operation for Zing)
    • Class redefinition
    • Heap dumping
    • Monitor deflation (not a global safepoint operation for Zing)
    • Lock unbiasing
    • Method deoptimization (not always)
    • And many more!
    A great talk by Azul's own John Cuthbertson from JavaOne 2014 goes into some background on safepoints and a lot of detail about safepoint operations other than GC (we at Azul feel GC is a solved problem, so the talk goes into the remaining reasons to stop).
    Note that the distinction between requesting a global safepoint and a thread safepoint exists only for some JVM implementations (e.g. Zing, the Azul Systems JVM. Reminder: I work for Azul). Importantly it does not exist on OpenJDK/Oracle JVMs. This means that Zing can bring a single thread to a safepoint.
    To summarize:
    • Safepoints are a common JVM implementation detail
    • They are used to put mutator threads on hold while the JVM 'fixes stuff up'
    • On OpenJDK/Oracle every safepoint operation requires a global safepoint
    • All current JVMs have some requirement for global safepoints

    When is my thread at a safepoint?

    So having threads at a safepoint allows the JVM to get on with its managed runtime magic show, great! When is this groovy state happening?
    • A Java thread is at a safepoint if it is blocked on a lock or synchronized block, waiting on a monitor, parked, or blocked on blocking IO. Essentially these all qualify as orderly de-scheduling events for the Java thread, and as part of tidying up before being put on hold the thread is brought to a safepoint.
    • A Java thread is at a safepoint while executing JNI code. The stack is left in a consistent state before crossing the native call boundary and handing off to the native code. This means that the thread can still run while at a safepoint.
    • A Java thread which is executing bytecode is NOT at a safepoint (or at least the JVM cannot assume that it is at a safepoint).
    • A Java thread which is interrupted (by the OS) while not at a safepoint is not brought to a safepoint before being de-scheduled.
    The JVM and your running Java threads have the following relationship around safepoints:
    • The JVM cannot force any thread into a safepoint state.
    • The JVM can stop threads from leaving a safepoint state.
    So how can the JVM bring all threads into a safepoint state? The problem is suspending a thread at a known state, not just interrupting it. To achieve this goal JVMs have the Java threads suspend themselves at convenient spots if they observe a 'safepoint flag'.

    Bringing a Java Thread to a Safepoint

    Java threads poll a 'safepoint flag' (global or thread level) at 'reasonable' intervals and transition into a safepoint state (the thread is blocked at a safepoint) when they observe a 'go to safepoint' flag. This is simple enough, but since we don't want to spend all of our time checking whether we need to stop, the C1/C2 compilers (the -client/-server JIT compilers) try to keep safepoint polls to a minimum. On top of the cost of the flag check itself, maintaining a 'known state' adds significant complexity to the implementation of certain optimizations, so keeping safepoints further apart opens up a wider scope for optimization. These considerations combined lead to the following locations for safepoint polls:
    • Between any 2 bytecodes while running in the interpreter (effectively)
    • On 'non-counted' loop back edge in C1/C2 compiled code
    • Method entry/exit (entry for Zing, exit for OpenJDK) in C1/C2 compiled code. Note that the compiler will remove these safepoint polls when methods are inlined.
    If you are the sort of person who looks at assembly for fun (or profit, or both) you'll find safepoint polls in the -XX:+PrintAssembly output by looking for:
    • '{poll}' or '{poll return}' on OpenJDK; these will appear in the instruction comments
    • 'tls.pls_self_suspend' on Zing, this will be the flag examined at the poll operation
    This mechanism is implemented differently on different VMs (on x86 + Linux, I've not looked at other architectures):
    • On Oracle/OpenJDK a blind TEST of an address on a special memory page is issued. Blind because it has no branch following it so it acts as a very unobtrusive instruction (usually a TEST is immediately followed by a branch instruction). When the JVM wishes to bring threads to a safepoint it protects the page causing a SEGV which is caught and handled  appropriately by the JVM. There is one such special page per JVM and therefore to bring any thread to a safepoint we must bring all threads to a safepoint.
    • On Zing the safepoint flag is thread local (hence the tls prefix). Threads can be brought to a safepoint independently.
    See related post: Dude, Where's My Safepoint? for more details on polls.
    Once a thread detects the safepoint flag it will perform the safepoint action that the poll triggers. This normally means blocking on some sort of JVM level lock, to be released when the safepoint operation is over. Think of it as a locking mechanism where:

    • Threads can lock themselves out (e.g. by calling into JNI, or blocking at a safepoint).
    • Threads can try and re-enter (when returning from JNI), but if the lock is held by the JVM they will block.
    • The safepoint operation requests the lock and blocks until it owns it (which happens when all the mutators have locked themselves out)

    All Together Now

    Now we have all the moving parts of the safepoint process:
    • Safepoint triggers/operations: reasons to request, and stuff to do which requires one or all threads to be at a safepoint
    • Safepoint state/lock: into which Java threads can voluntarily check in, but can only leave when the JVM is done executing the safepoint operation
    • Safepoint poll: the voluntary act by a Java thread to go into a safepoint if one is needed
    Let's see how it all hangs together, for example when a profiler is sampling the stack using JVMTI::GetStackTrace:

    [Diagram from the original post: a timeline of threads J1-J5 around a global safepoint. Green: executing Java; yellow: descheduled Java; red: executing native code.]
    This diagram highlights several components of safepoint operation overheads:
    1. Time To Safepoint (TTSP): Each thread enters a safepoint when it hits a safepoint poll, but arriving at a safepoint poll requires executing an unknown number of instructions. We can see J1 hits a safepoint poll straight away and is suspended. J2 and J3 are contending on the availability of CPU time. J3 grabs some CPU time, pushing J2 into the run queue, but J2 is not in a safepoint. J3 arrives at a safepoint and suspends, freeing up the core for J2 to make enough progress to get to a safepoint poll. J4 and J5 are already at a safepoint while executing JNI code, so they are not affected. Note that J5 tries to leave JNI halfway through the safepoint and is suspended before resuming Java code. Importantly, we observe that the time to safepoint varies from thread to thread, and some threads are paused for longer than others; Java threads which take a long time to get to a safepoint can delay many other threads.
    2. Cost of the safepoint operation: This will depend on the operation. For GetStackTrace the cost of the operation itself will depend on the depth of the stack for instance. If your profiler is sampling many threads (or all the threads) in one go (e.g. via JVMTI::GetAllStackTraces) the duration will grow as more threads are involved. A secondary effect may be that the JVM will take the opportunity to execute some other safepoint operations while the going is good. 
    3. Cost of resuming the threads.
    Some takeaways from this analysis:
    1. Long TTSP can dominate pause times: some causes for long TTSP include page faults, CPU over-subscription, long running counted loops.
    2. The cost of stopping all threads and starting all threads scales with the number of threads. The more threads the higher the cost. Assuming some non-zero cost to suspend/resume and TTSP, multiply by number of threads...
    3. Add -XX:+PrintGCApplicationStoppedTime to your list of favourite flags! It prints the pause time and the TTSP.
    But why should you trust me? Let's break some shit!!! Please note in the following examples I will be switching between Oracle 8u60 and Zing (latest dev JDK8 build). I work for Azul Systems on the Zing JVM, so feel free to assume bias on my part. My main motivation in including both JVMs is so that I can demonstrate the effect of -XX:+KeepSafepointsInCountedLoops on Zing.
    NOTE: The benchmarks are not intended to spark a flame war and though Zing happens to compile this particular piece of code 'better' I would not get over excited about it. What is exciting is the extra option Zing makes available here to control and minimize TTSP, and the additional perspective it offers on how a JVM can deal with these challenges.

    Example 0: Long TTSP Hangs Application
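    The example is embedded in the original post; a minimal reconstruction along the lines of the GitHub version (class name and details approximate) is:

public class WhenWillItExit {
  public static void main(String[] args) throws InterruptedException {
    Thread t = new Thread(() -> {
      long l = 0;
      for (int i = 0; i < Integer.MAX_VALUE; i++) {    // counted loop: no safepoint poll
        for (int j = 0; j < Integer.MAX_VALUE; j++) {  // counted loop: no safepoint poll
          if ((j & 1) == 1) l++;
        }
      }
      System.out.println("How Odd:" + l);
    });
    t.setDaemon(true); // a daemon thread should not keep the JVM alive
    t.start();
    Thread.sleep(5000); // main returns here, so the JVM should shut down... but doesn't
  }
}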

    The above code should exit in 5 seconds (or see it on GitHub). The silly computation running in the daemon thread should not stop it from exiting (the JVM should exit if all remaining running Java threads are daemon threads). But if we run this sample we will find that the JVM does not exit after 5 seconds. It doesn't exit at all. Not only does it not exit, it will also refuse incoming JMX connections and won't respond to jmap/jstack. It won't die naturally; you'll have to 'kill -9' this fucker.
    The reason it's so stuck is because the computation is running in a 'counted' loop, and counted loops are considered short by the JVM. Short enough to not have safepoints. This thread refusing to come to a safepoint will lead to all other threads suspending indefinitely at the next safepoint operation.
    We can fix this (somewhat contrived) problem in several ways:
    1. Prevent C1/C2 compilation: Use -Xint or a compile command to prevent the method compilation. This will force safepoint polls everywhere and the thread will stop hurting the JVM.
    2. Replace the index type with long: this will make the JVM consider this loop as 'uncounted' and include a safepoint poll in each iteration.
    3. On Zing you can apply the -XX:+KeepSafepointsInCountedLoops option on either the JVM or method level (via a -XX:CompileCommand=option,... or via a compiler oracle file). This option is coming shortly to an OpenJDK near you. See also related bug where breaking the loop into an inner/outer loop with safepoints in the outer loop is discussed.
    This example is handy when considering shared JVM solutions. This is perfectly 'safe' Java code that will bring most application containers to their knees.
    Summary: Delaying a safepoint poll indefinitely can be harmful...

    Example 1: More Running Threads -> Longer TTSP, Higher Pause Times

    The benchmark has 1 thread triggering a safepoint operation by calling safepointMethod() (in this case System.gc() will be called as a result) and many threads scanning a byte array. The byte array scan loop will not have a safepoint poll (it's a counted loop), so the size of the array will act as our intended safepoint poll interval. With this setup we can control the frequency of safepoint operations (with some overhead), the average distance from a safepoint poll, and (with JMH) the number of threads. We can use -XX:+PrintGCApplicationStoppedTime to log the stopped time and TTSP. Armed with this experiment we can now find a large machine to play on and see what happens. I use the command line '-tg' parameter to control the number of threads executing the contains1() method.
    So, here goes on a 48 core Haswell Xeon machine, Cent OS, I'll be using Oracle 8u60. The benchmark class I'm running is here.
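    The benchmark is roughly shaped like this (a trimmed-down sketch of the linked class, not a verbatim copy):

import org.openjdk.jmh.annotations.*;

// Trimmed-down sketch of the SafepointUsingGc benchmark, not a verbatim copy.
@State(Scope.Benchmark)
public class SafepointUsingGc {
  @Param("32768") int size;       // distance between safepoint polls, in array elements
  @Param("100")   int intervalMs; // how often the safepoint operation is triggered
  byte[] bytes;

  @Setup
  public void setup() { bytes = new byte[size]; }

  @Benchmark @Group("run_together") @GroupThreads(1)
  public boolean contains1() {
    // counted loop, no safepoint poll inside => long TTSP when size is large
    for (int i = 0; i < bytes.length; i++) {
      if (bytes[i] == 1) return true;
    }
    return false;
  }

  @Benchmark @Group("run_together") @GroupThreads(1)
  public void park() throws InterruptedException {
    Thread.sleep(intervalMs);     // baseline: park without triggering a safepoint
  }

  @Benchmark @Group("run_together") @GroupThreads(1)
  public void safepoint() throws InterruptedException {
    Thread.sleep(intervalMs);
    System.gc();                  // the safepoint operation under test
  }
}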
    Let's start with no contains1() threads and explain the method and data collection as we go along:
    $ java -jar safepoints.jar -f 1 -i 10 -wi 10 -jvmArgs="-XX:+PrintGCApplicationStoppedTime -Xms1g -Xmx1g" -tg 0,0,0,1 SafepointUsingGc

    There's 4 thread groups because I add another benchmark to run with this crowd in the next sample.
    This spouts a lot of PrintGCApplicationStoppedTime printouts similar to:
    Total time for which application threads were stopped: 0.0180172 seconds, Stopping threads took: 0.0000513 seconds

    I'm no Unix shell guru, but with some googling even I can AWK something to summarize this shit:
    grep 'Total time' output.file | awk '{ stopped += $9;ttsp += $14; n++ } END { if (n > 0) print "stopped.avg=" stopped / n ", ttsp.avg=" ttsp / n; }'

    It's summing up the warmup period measurements as well, but I'm willing to let it ride. So with that in mind, let's compare the benchmark output with the reported pause times:
    SafepointUsingGc.run_together:safepoint           100   10   32768     true  avgt   10  118611444.689 ± 1288530.618  ns/op
    stopped.avg=0.015759, ttsp.avg=6.98995e-05

    Or formatted for humans:
    park+gc(ms) |Stp.ms|TTSP.ms|
      119 ± 1.0 | 16.1 |  0.07 |


    We went with the default intervalMs=100, so we'd expect the safepoint() cost to be around 100ms + stopped.avg; it's actually 2ms more than that. This is why we have the park() benchmark to baseline against. Running again with the park group enabled ('-tg 0,0,1,1'):
    park(ms)   | park+gc(ms) |Stp.ms|TTSP.ms|
     101 ± 0.8 | 118 ± 1.4   | 15.9 |  0.07 |

    We can see that there's some variance to the GC call and some scheduling variance, which can account for the delta between our expected and actual values. Small potatoes, let's move on. We start by measuring the cost of the contains1() call when we're not slapping it with a GC every 100ms ('-tg 1,0,1,0', i.e. also running the park baseline):
    contains1(us) | park(ms)    |Stp.ms|TTSP.ms|
      32.2 ± 0.02 | 100.1 ± 0.0 |  0.0 |  0.0  |

    Note the contains1() method cost is in microseconds. Awesome, now let's ramp up the number of contains1() threads with the GC thread running every 100ms:
    T   |contains1(us) | park(ms)    | park+gc(ms) |Stp.ms|TTSP.ms|
    1   |  51.4 ±  3.1 | 100.7 ± 0.7 | 117.5 ± 0.7 | 15.1 |  0.1  |
    2   |  49.5 ± 17.1 | 101.6 ± 1.9 | 116.9 ± 0.5 | 14.4 |  0.1  |
    4   |  45.5 ±  2.6 | 102.8 ± 4.0 | 116.9 ± 0.7 | 14.4 |  0.1  |
    8   |  45.6 ±  2.1 | 103.6 ± 5.6 | 114.0 ± 0.7 | 13.3 |  0.1  |
    16  |  46.8 ±  1.3 | 103.4 ± 6.8 | 115.0 ± 1.5 | 12.8 |  0.2  |
    32  |  65.7 ±  2.0 | 101.8 ± 0.9 | 113.0 ± 1.3 | 11.4 |  0.7  |
    64  | 121.9 ±  1.9 | 106.0 ± 8.1 | 116.1 ± 1.4 | 14.5 |  5.4  |
    128 | 251.9 ±  6.6 | 103.6 ± 5.3 | 119.6 ± 2.3 | 16.9 |  6.4  |
    256 | 513.2 ± 13.4 | 109.2 ± 9.7 | 123.4 ± 1.7 | 20.7 |  8.0  |
    512 |1111.3 ± 69.7 | 143.9 ±24.8 | 134.8 ±15.3 | 27.8 | 10.0  |


    OK, that escalated not so quickly. Keep in mind we only actually have 48 cores (hyper-threaded), so when we cross the line between 32 and 64 threads we are looking at the effect of contention and scheduling on the TTSP. It is evident that if you have more CPU-bound threads than CPUs performance will degrade, but that is not really what we're after.
    At 1 thread we already see a big drop in performance for contains1(). This is somewhat understandable in the presence of 15ms stolen from the application every 100ms.  But where you'd expect a 15% difference we're actually looking at a far more significant drop in performance. Going from 32us to 51us kinda sucks.
    We can see a steady growth in TTSP while within the core count, going from 0.1ms to 0.7ms. This is bad news for anyone looking to keep their latencies in the low-microsecond range, as touched upon in this post from the brilliant Mark Price. Mark highlights the fact that the JVM regularly calls for a global safepoint ("whether she needed it or not"), so you may find to your delight that these little hiccups come round even when you've been allocating nothing.
    Once we have more threads than cores the TTSP cost goes into the milliseconds, but escalates rather slowly once there. Going from 5ms to 10ms while we go from 64 to 512 threads is better than I expected.
    Summary: By increasing the thread count we increase the TTSP. CPU over-subscription will lead to a further increase in TTSP. Monitor your machines' load average to detect this kind of misconfiguration.

    Example 2: Long TTSP has Unfair Impact

    Building on the previous benchmark, but now we have 2 methods: contains1() and contains1toK(). contains1toK() is K times further away from a safepoint poll (the outer loop is counted, the containsNeedle method gets inlined, removing the method exit safepoint, and the loop inside it is counted). By increasing K we can see the impact of one thread on the other; in particular we expect the cost of contains1() to increase as the cost of contains1toK() increases, in the presence of safepoint operations. The unfairness here being that while contains1toK() is making progress it is making contains1(legen...) wait for it (...DARY!). Because using GC as the safepoint operation is rather heavy handed (even though it's a small heap and there's no garbage) we'll switch to using another safepoint operation for this case: ManagementFactory.getThreadMXBean().findDeadlockedThreads(). This is a lot more lightweight than GC but becomes more expensive as we pile on threads (not by that much, but still), so I did not use it for the prior example (though come to think of it the number of threads should also impact GC length a bit, oh well). Anyway, variety is the spice of life. Here's the results with both methods and park(), but no safepoint operation (benchmark class is here):
    K     |contain1(us)|   containsK(us)    | park(ms)    |Stp.ms|TTSP.ms|
    1     | 32.3 ± 0.1 |     34.8 ±     1.7 | 100.0 ± 0.0 |  0.0 |  0.0  |
    10    | 32.4 ± 0.5 |    349.7 ±    18.0 | 100.0 ± 0.0 |  0.0 |  0.0  |
    100   | 38.4 ± 2.8 |   3452.7 ±   130.4 | 100.0 ± 0.0 |  0.2 |  0.1  |
    1000  | 39.8 ± 1.1 |  33419.9 ±   421.6 | 100.0 ± 0.0 |  0.0 |  0.0  |
    10000 | 41.7 ± 5.0 | 336112.5 ± 10306.5 | 100.5 ± 1.7 |  0.0 |  0.0  |

    Indeed the cost of contains1toK() grows linearly with K and the 2 methods don't seem to impact each other that significantly.
    Here's what happens as we throw in the safepoint ops:
    K     |contains1(us)|   containsK(us)   | park(ms)     | park+FDT(ms) |Stp.ms |TTSP.ms|
    1     |  39.7 ± 0.9 |     39.6 ±    1.4 | 100.0 ±  0.0 | 100.3 ±  0.0 |   0.1 |   0.1 |
    10    |  40.0 ± 1.0 |    400.1 ±   14.4 | 100.0 ±  0.0 | 100.5 ±  0.0 |   0.2 |   0.2 |
    100   |  41.8 ± 1.1 |   3835.4 ±  141.2 | 100.1 ±  0.0 | 102.7 ±  0.1 |   2.1 |   2.0 |
    1000  |  67.4 ± 3.5 |  37535.7 ± 2237.2 | 102.5 ±  1.9 | 127.1 ±  1.1 |  20.7 |  20.7 |
    10000 | 183.4 ± 8.5 | 346048.6 ± 9818.2 | 194.9 ± 11.5 | 348.4 ± 11.4 | 152.8 | 152.8 |

    A terrible thing, right? The annoying/unfair thing is that being a shit has no effect on contains1toK() performance. It scales linearly and seems oblivious to the havoc it's wreaking for everyone else. Note that the problem we see here was present in the run where no safepoint operations were invoked; there were just no safepoint operations to bring it out. On a long enough timeline a safepoint operation will come along, and around it we'd have seen the same issue.
    How can we fix it, and at what cost? Let's explore some options.

    Solution 1: I use Zing -> use -XX:+KeepSafepointsInCountedLoops

    So Zing is super awesome, and has this option to keep safepoint polls in counted loops which can come in handy in cases where one particular method is killing you softly with TTSP. Here's the results we get with Zing when running the benchmark above (same jvmArgs, same machine etc):
    K     |contains1(us) |   containsK(us)   | park(ms)     | park+FDT(ms) | Stp.ms|
    1     |  31.9 ±  0.9 |    30.5 ±     1.0 | 100.0 ±  0.0 | 100.2 ±  0.0 |   0.0 |
    10    |  32.9 ±  0.2 |   317.8 ±     8.8 | 100.0 ±  0.0 | 100.2 ±  0.0 |   0.0 |
    100   |  34.1 ±  2.4 |  3337.8 ±   265.4 | 100.1 ±  0.0 | 100.8 ±  0.2 |   0.4 |
    1000  |  44.4 ±  1.9 | 33991.6 ±  1199.0 | 100.9 ±  0.2 | 117.0 ±  0.6 |  10.2 |
    10000 | 159.0 ± 12.1 |299839.3 ±  6297.2 | 156.3 ±  8.0 | 319.8 ± 30.5 | 113.4 |

    Still bad, but Zing happens to squeeze better performance from the containsNeedle() method, and while contains1() performance degrades it opens a wide gap with the behaviour observed above, probably because less time is lost to TTSP (contains1toK() finishes quicker). Also note that -XX:+PrintGCApplicationStoppedTime only prints out the stopped time for Zing. A separate command line argument is available which prints out safepoint statistics, but as this post grows longer I'm losing my will to rerun and parse stuff...
    We add the JVM argument: "-XX:CompileCommand=option,*.contains1ToK,+KeepSafepointsInCountedLoops" and watch the world burn:
    K     |contain1(us)|   containsK(us)   | park(ms)    | park+FDT(ms)| Stp.ms|
    1     | 33.0 ± 0.6 |     40.1 ±    1.2 | 100.0 ± 0.0 | 100.2 ± 0.0 |   0.0 |
    10    | 32.8 ± 0.7 |    405.1 ±   13.1 | 100.0 ± 0.0 | 100.2 ± 0.0 |   0.0 |
    100   | 33.3 ± 0.7 |   4124.8 ±  104.4 | 100.0 ± 0.0 | 100.3 ± 0.0 |   0.1 |
    1000  | 32.9 ± 0.2 |  40839.6 ±  854.0 | 100.0 ± 0.0 | 101.0 ± 0.3 |   0.6 |
    10000 | 40.2 ± 2.9 | 374935.6 ± 7145.3 | 111.3 ± 3.7 | 100.2 ± 0.0 |   0.0 |

    SHAZZZING!... This is pretty fucking awesome. The impact on contains1toK() is visible, but I didn't have to change the code and the TTSP issue is gone man, solid GONE. A look at the assembly verifies that the inner loop is still unrolled within the outer loop, with the addition of the requested safepoint polls. There are more safepoint polls than I'd like here, since the option is applied to both the outer method and the inlined method (to be improved upon sometime down the road...).

    Solution 2: I use OpenJDK/Oracle, disable inner loop inlining

    If the inner loop is in its own method, as in our example, or if we can modify the code such that it is, we can ask the compiler not to inline that method. This should work out nicely here. To force the JVM to not inline the containsNeedle() method only in contains1toK() I made a copy of containsNeedle() called containsNeedle2() and used this argument: "-XX:CompileCommand=dontinline,*.containsNeedle2". The results:
    K     |contain1(us)|   containsK(us)   | park(ms)    | park+FDT(ms)|Stp.ms|TTSP.ms|
    1     | 39.9 ± 1.0 |     40.0 ±    2.6 | 100.0 ± 0.0 | 100.3 ± 0.0 |  0.1 | 0.0   |
    10    | 39.9 ±  .7 |    395.6 ±   10.0 | 100.0 ± 0.0 | 100.3 ± 0.0 |  0.1 | 0.0   |
    100   | 41.3 ± 3.9 |   4030.6 ±  405.0 | 100.0 ± 0.0 | 100.3 ± 0.0 |  0.1 | 0.0   |
    1000  | 39.6 ± 2.1 |  34543.8 ± 1409.9 | 100.0 ± 0.0 | 100.3 ± 0.0 |  0.1 | 0.0   |
    10000 | 39.1 ± 1.2 | 328380.6 ± 8478.1 | 100.0 ± 0.0 | 100.3 ± 0.0 |  0.1 | 0.0   |

    That worked well! No harm to contains1(), no TTSP issue, and contains1toK() performance didn't suffer for it. Inlining is usually a key optimization you want to make sure happens, but in this case the right thing to do is stop it from happening. The work inside containsNeedle() is chunky enough to make this a good tradeoff. If the external loop was at a very high count, and the array size rather small this would have been a very different story.

    Solution 3: Last resort. Stop compiling the offending method

    This is a bit terrible, but if things get bad AND you can't change the code, you can use a CompileCommand to exclude the method from compilation. The method will be forced to stay in the interpreter, so it will effectively have a safepoint poll between every 2 bytecodes. Performance will suck, but at least the application will be responsive. I am not gonna run this option, but if you want to, add the following argument to the command line and watch it roll: "-XX:CompileCommand=exclude,*.contains1toK"

    Summary: Threads which suffer from large TTSP will cause other threads to wait for them in the presence of a global safepoint attempt.

    Example 3: Safepoint Operation Cost Scale


    For this example we replace the safepoint() method with gathering thread stacks using ManagementFactory.getThreadMXBean().dumpAllThreads(false, false). Gathering thread stacks also occurs at a safepoint, and the safepoint operation cost will scale with the number of threads. We construct a number of idle threads in the setup method to control the safepoint operation cost. To make sampling the threads' stacks expensive we park the fuckers 256 frames deep.
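    Setting up such idle threads might look something like this (a sketch, not the benchmark's actual setup code):

import java.util.concurrent.locks.LockSupport;

// Sketch: build idle threads whose stacks are ~256 frames deep, so that
// sampling them (a safepoint operation) has something substantial to walk.
class DeepStackIdlers {
  static Thread startIdler(int depth) {
    Thread t = new Thread(() -> descendAndPark(depth));
    t.setDaemon(true);
    t.start();
    return t;
  }

  static void descendAndPark(int remainingFrames) {
    if (remainingFrames > 0) {
      descendAndPark(remainingFrames - 1); // add another frame
    } else {
      while (true) {
        LockSupport.park();                // idle at the bottom of a deep stack
      }
    }
  }

  public static void main(String[] args) throws InterruptedException {
    for (int i = 0; i < 1000; i++) {
      startIdler(256);
    }
    Thread.sleep(60_000); // keep the JVM around while we sample it
  }
}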
    Different safepoint operations scale with different aspects of your process. Certain GC algorithms' stop-the-world phases will scale with heap size. Profiling by collecting thread dumps (as done by the JVisualVM sampler for instance) will scale with the number of threads (idle or not) and the stack depth. Deflating monitors will scale with the number of monitors in your application, etc.
    Here's what happens as we pile on the idle threads (running only contains1()):
    IT   |  contain1(us)|  park(ms)    |  park+DTS(ms) | Stp.ms|TTSP.ms|
    0    |  38.0 ±  1.6 | 100.0 ±  0.0 |  100.5 ±  0.0 |   0.2 |   0.0 |
    1    |  40.7 ±  1.4 | 100.1 ±  0.0 |  102.2 ±  0.0 |   0.6 |   0.0 |
    10   |  44.7 ±  0.8 | 100.2 ±  0.1 |  117.2 ±  0.1 |   5.5 |   0.0 |
    100  |  75.5 ±  0.8 | 122.6 ±  3.5 |  261.9 ±  2.3 |  61.3 |   0.1 |
    1000 | 204.5 ± 38.5 | 257.5 ± 28.8 | 1414.2 ± 47.0 | 313.0 |   1.0 |

    We get long stops and short TTSPs! Hoorah! You can play with the number of threads and the depth of the stack to get the effect you want. It is interesting that most profilers using JVMTI::Get*StackTrace opt for sampling all threads. This is risky to my mind, as we can see above that the profiling overhead is open ended. It would seem reasonable to sample 1 to X threads (randomly picking the threads we sample) to keep the overhead in check on the one hand, and to pick a reasonable balance of TTSP vs. operation cost on the other.
    As mentioned previously, JVMTI::GetStackTrace does not cause a global safepoint on Zing. Sadly, AFAIK no commercial profilers use this method, but if they did it would help. The JMX thread bean does expose a single-thread profiling method, but this only delegates to the multi-thread stack trace method underneath, which does cause a global safepoint...

    Summary: This final step adds another interesting control for us to play with, completing our set:
    • Safepoint poll interval -> Will drive time to safepoint. CPU scarcity, swapping and page faults are other common reasons for large TTSP.
    • Safepoint operation interval -> Sometimes in your control (be careful how you profile/monitor your application), sometimes not.
    • Safepoint operation cost -> Will drive time in safepoint. Will depend on the operation and your application specifics. Study your GC logs.

      Example 4: Adding Safepoint Polls Stops Optimization

      This is a repeat of a benchmark I ran to demonstrate the side effect of safepoint polls in loops on loop optimizations in the context of changing one of the Netty APIs. This post is so long by now that if you had the strength to read this far you can probably click through to the ticket. I'll quote the basic results here for completeness.
      The benchmark:
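      The benchmark methods are shaped along these lines (a sketch of the sum case only; the full benchmark also covers copy, fill and equals):

import org.openjdk.jmh.annotations.*;

// Sketch of the int vs. long loop-index benchmark shape (sum case only).
@State(Scope.Thread)
public class IndexTypeBench {
  @Param("1000") int size;
  int[] ints;

  @Setup
  public void setup() {
    ints = new int[size];
  }

  @Benchmark
  public long sumInt() {
    long sum = 0;
    for (int i = 0; i < ints.length; i++) { // counted loop: unrolling, superword, no poll
      sum += ints[i];
    }
    return sum;
  }

  @Benchmark
  public long sumLong() {
    long sum = 0;
    for (long i = 0; i < ints.length; i++) { // uncounted loop: safepoint poll, fewer optimizations
      sum += ints[(int) i];
    }
    return sum;
  }
}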
      We are debating replacing the length and DataSet index types from int to longs. The impact on usage inside loops is substantial as demonstrated by the following results:
      Benchmark   (size)  Mode  Cnt    Score     Error  Units
      copyInt       1000  avgt    5   76.943 ±  14.256  ns/op
      copyLong      1000  avgt    5  796.583 ±  55.583  ns/op
      equalsInt     1000  avgt    5  367.192 ±  16.398  ns/op
      equalsLong    1000  avgt    5  806.421 ± 196.106  ns/op
      fillInt       1000  avgt    5   84.075 ±  19.033  ns/op
      fillLong      1000  avgt    5  567.866 ±  10.154  ns/op
      sumInt        1000  avgt    5  338.204 ±  44.529  ns/op
      sumLong       1000  avgt    5  585.657 ± 105.808  ns/op
      There are several optimizations negated by the transition from counted to un-counted loop (uncounted loops currently include loops limited by a long, and therefore include a safepoint):
      • Loop unrolling: This is not really a safepoint issue; it can be fixed as per the KeepSafepointsInCountedLoops option
      • Fill pattern replacement (-XX:+OptimizeFill): the current fill implementation has no safepoints and the loop pattern it recognizes does not allow it.
      • Superword optimizations (-XX:+UseSuperword): current Superword implementation doesn't allow for long loops. This will probably require the compiler to construct an outer loop with an inner superword loop 
      The latter 2 have been discussed on this blog before in some detail.

      Final Summary And Testament

      If you've read this far, you deserve a summary, or a beer, or something. WELL DONE YOU!!! Here's what I consider the main takeaways from this discussion/topic:
      • Safepoint/Safepoint poll/Safepoint operation are 3 distinct terms, often mingled. Separating them allows us to consider their costs and impacts individually.
      • In most discussions the term Safepoint is used to describe a global safepoint. Some JVMs can control safepoint transition on the thread level. By considering the individual thread state we can more easily reason about the transition costs on a thread by thread level. 
      • The failure of any thread to reach a safepoint (by hitting a safepoint poll) can cripple the whole JVM. In this sense almost any bit of Java code can be considered blocking (as in not lock free). Consider for example a CAS loop: it's not counted, so it will have a safepoint poll in it, and the thread executing it can be blocked indefinitely if another Java thread is suspended while not at a safepoint.
      • JVMs have many safepoint operations. If you are worried about their impact you should enable -XX:+PrintGCApplicationStoppedTime. Further detail on what types of safepoint operations were involved can be made available using -XX:+PrintSafepointStatistics -XX:PrintSafepointStatisticsCount=X. Try it out, it's educational.

      References

      • From the OpenJDK Runtime Overview look for the "VM Operations and Safepoints" section. Note that the term safepoint is used here to mean a global safepoint.
      • J.Bempel's post covering many safepoint operations and observing the impact of safepoints.
      • Alexey Ragozin's post also covers similar definition to the Runtime Overview and explores the safepoint poll mechanism and logging options.
      • Gil Tene's post on the Mechanical Sympathy mailing list, which closely matches the "Thread State" approach above. Gil goes into the topic in the context of Unsafe operations and assumptions surrounding safepoints.

      Wait For It: Counted/Uncounted loops, Safepoints and OSR Compilation

      In this previous post about Safepoints I claimed that this here piece of code:
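      The embedded snippet is the same WhenWillItExit example reconstructed in Example 0 of the previous post; approximately:

public class WhenWillItExit {
  public static void main(String[] args) throws InterruptedException {
    Thread t = new Thread(() -> {
      long l = 0;
      for (int i = 0; i < Integer.MAX_VALUE; i++) {    // int-limited, i.e. a counted loop
        for (int j = 0; j < Integer.MAX_VALUE; j++) {
          if ((j & 1) == 1) l++;
        }
      }
      System.out.println("How Odd:" + l);
    });
    t.setDaemon(true);
    t.start();
    Thread.sleep(5000);
  }
}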

      Will get your JVM to hang and will not exit after 5 seconds, as one might expect from reading the code. The reason this happens, I claimed, is because the compiler considers for loops limited by an int to be counted, and will therefore not insert a safepoint poll into the generated assembly code. This was the theory, and I tested it, and it definitely hung on my machine ('It Works On My Machine'™).
      To my great surprise, some people not only read the posts, they also run the code (it's like they don't trust me ;-) ). And 2 of those diligent readers got back to me in the comments saying that actually the JVM exits just fine. So I ran it again, and it hung, again. Then I ran it with different versions of the JVM, and it hung as I expected. Then I thought, maybe they are running the code I put on GitHub, so I tried that. That also hung when I ran it from Eclipse, but did not when I built it with Maven and ran it from the command line. EUREKA!
      As it happens, Eclipse has its own Java compiler (a fact well known to those who know it well, no doubt) and the bytecode it generated (the .class file) hung every time, but Maven uses javac and the .class file generated from that did not hang. When testing initially I was running my class inside Eclipse, or using the Eclipse .class file to run from the command line. Both .class files are valid, but different (same-same but different). We can use javap to print out class bytecode and compare ("javap -c -v -p <class file>"). For your pleasure you can find the original Java file, the 2 class files and their respective bytecode printouts on GitHub, and you can come to your own conclusions before you continue reading.
      So... is my understanding (and the Eclipse compiler's) somehow out of date? Can this problem of telling counted loops from uncounted loops be solved by the bytecode compiler?

      How is the bytecode different?

      Well, mostly not that different:
      1. An easy-to-spot difference, which we can tell is semantically the same, is the bytecode generated for "System.out.println("How Odd:" + l);". You'll notice that Eclipse is using the constructor java/lang/StringBuilder."<init>":(Ljava/lang/String;)V while javac is opting for java/lang/StringBuilder."<init>":()V followed by an append. Semantically the 2 choices are the same: we create a StringBuilder and append the String "How Odd:" and the variable l. The reality is that there are subtle differences; the Eclipse version will allocate a larger char[] for instance. It may also be that the Eclipse pattern will miss out on some String concatenation optimization, because the OpenJDK implementers are likely to focus on patterns generated from the official javac.
      2. The loop interpretation is different. Eclipse is checking the loop conditions at the end of the loop whereas javac is building a loop with the checks upfront and the loop backedge at the end. That seems like the important difference.
      Let's zoom in on the relevant bytecode for the loop:

      If we were to reverse translate the loop bytecode to ugly Java (which supported goto) we'd get:
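      Java has no goto, so here is an approximation of the two control-flow shapes in legal Java (a sketch of the shapes only, not the post's original reverse-translation):

// Sketch of the two loop shapes; names and bodies are illustrative.
class LoopShapes {
  // javac shape: condition checked up front, backedge (jump back to the check) at the end.
  static long countOddsJavacShape(int limit) {
    long l = 0;
    int i = 0;
    while (i < limit) {     // check up front
      if ((i & 1) == 1) l++;
      i++;                  // ...then jump back to the check (the backedge)
    }
    return l;
  }

  // Approximates the Eclipse (ecj) shape: the single condition check sits at the end of the
  // loop body and the bytecode jumps straight to it on entry; legal Java needs the guard above.
  static long countOddsEcjShape(int limit) {
    long l = 0;
    int i = 0;
    if (i < limit) {
      do {
        if ((i & 1) == 1) l++;
        i++;
      } while (i < limit);  // the check lives at the end of the loop body
    }
    return l;
  }
}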

      Note that there's no special Java bytecode for loop control flows. So why so different at runtime? If we dump out the assembly for the run method we will see that the Eclipse version is missing the safepoint in the loop (the method has a {poll_return} but there's no {poll}), and the javac version has that safepoint there. Is it the bytecode difference that makes this happen?

      Variations on bytecode

      I tinkered with the class a bit and refactored the loop out of the lambda. This results in similar bytecode for the loop and the same behaviour. I then made the limit of the loop a parameter of the method; this time it behaved like the javac version, but had the same funky loop structure:
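      Roughly, the refactored variants look like this (approximate, structure simplified; the exact classes are on GitHub):

// Approximate sketch of the refactored variants; see the GitHub classes for the real thing.
class WhenWillItExitInt2 {
  // Variant 1: loop extracted out of the lambda, limits still constant.
  static long countOdds() {
    long l = 0;
    for (int i = 0; i < Integer.MAX_VALUE; i++)
      for (int j = 0; j < Integer.MAX_VALUE; j++)
        if ((j & 1) == 1) l++;
    return l;
  }

  // Variant 2: the loop limit is now a parameter of the method.
  static long countOdds(int limit) {
    long l = 0;
    for (int i = 0; i < limit; i++)
      for (int j = 0; j < limit; j++)
        if ((j & 1) == 1) l++;
    return l;
  }
}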

      WTF? If this is purely a bytecode subtlety then it seems pretty weird. What sort of flaky compiler would swing like this?

      The compiler did it?

      But... which compiler? So far we have looked at the bytecode compiler's output, but this is only the first of many compilers to process your code on its way to being executed. Traditionally when discussing Java code compilation we look at the following stages:
      1. Javac: From code to bytecode
      2. Interpreter: Step through the bytecode
      3. C1/Client: Short compilation time, does not generate the best optimized code. Intended for short lived applications.
      4. C2/Server: Longer compilation time, better optimized code. Intended for long running applications.
      And the compilation timeline would look something like this:
                                                         /->C1
      *.java->javac->*.class->interpreter->class+profile<
                                                         \->C2
      The situation with OpenJDK8 is more complex with Tiered Compilation allowing us a smooth transition from interpreter to C2 compilation via C1 compilations. 
      Given the behaviour we are after was supposed to manifest after a C1/C2 compilation we can run the code above to find which compiler compiled the method in question by adding the flag: -XX:+PrintCompilation:
          159  201 %     3       safepoint.hang.WhenWillItExitInt2::countOdds @ 13 (44 bytes)
          159  202       3       safepoint.hang.WhenWillItExitInt2::countOdds (44 bytes)
          159  203 %     4       safepoint.hang.WhenWillItExitInt2::countOdds @ 13 (44 bytes)
          162  201 %     3       safepoint.hang.WhenWillItExitInt2::countOdds @ -2 (44 bytes)   made not entrant
          142  200 %     3       safepoint.hang.WhenWillItExitInt2::countOdds @ -2 (44 bytes)   made not entrant
         1896  202 %     4       safepoint.hang.WhenWillItExitInt2::countOdds @ -2 (44 bytes)   made not entrant
         1896  203 %     3       safepoint.hang.WhenWillItExitInt2::countOdds @ 13 (44 bytes)
         1896  204 %     4       safepoint.hang.WhenWillItExitInt2::countOdds @ 13 (44 bytes)
         1899  203 %     3       safepoint.hang.WhenWillItExitInt2::countOdds @ -2 (44 bytes)   made not entrant
The 3/4 printouts in the 4th column above indicate the method is compiled by C1 (profiled) and C2. But what is that % about? It is an indicator that the compilations are not 'normal' compilations but rather On Stack Replacement (OSR) compilations. OSR compilations are 'special'; from the OpenJDK glossary:
      on-stack replacement
      Also known as 'OSR'. The process of converting an interpreted (or less optimized) stack frame into a compiled (or more optimized) stack frame. This happens when the interpreter discovers that a method is looping, requests the compiler to generate a special nmethod with an entry point somewhere in the loop (specifically, at a backward branch), and transfers control to that nmethod. A rough inverse to deoptimization.
I like the "rough inverse to deoptimization"; it's a great mental qualification helper. OSR is the opposite of deoptimization because it is a mid-method replacement of code with compiled code, whereas a deoptimization is a mid-method replacement of code with interpreted code. Normal compilation unit replacements happen out of method, i.e. while the method is off stack. OSR is specifically done to replace long running loops, like our countOdds method. As we can see from the printout above both C1 and C2 can produce OSR compilations.
      It is impossible to disable OSR directly in a product build (there's a developer option -XX:-CICompileOSR), but we can force upfront compilation by using the following flags: "-XX:-TieredCompilation -server/client -Xcomp". Running with these arguments all 3 variations (original, refactored loop, refactored parametrized loop) now hang both for the Eclipse and the javac generated bytecode.
      In both cases where the Eclipse versions hang we can see OSR compilations of the relevant code.
      So it seems:
      • C1/C2 non-OSR compilations of the loop treat it as a counted loop, thus eliminating the safepoint poll and demonstrating the hung behaviour due to huge TTSP.
      • C1/C2 OSR compilations of the same code result in different results depending on the generated bytecode for the loop. I'm not going to dig further into the reasons, but I would welcome any feedback from OpenJDK compiler guys.
      The OSR compilation decision to treat the loop as non-counted is quite reasonable. If we have a loop which remains on stack for very long periods we should perhaps consider it uncounted.

      Killer Task

I started down this path looking for valid Java code which will result in a hung JVM due to huge TTSP. Since OSR got in the way, can we build such a task? Sure we can, we just need to warm up the loop until it is C2 compiled and then run it with a large limit, like so:
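Here's a minimal sketch of such a task, assuming a countOdds method along the lines discussed earlier (the names and thresholds are mine, not the original code): warm the method up with small limits so it gets a regular (non-OSR) compilation, then send it off with a huge limit and ask for a safepoint from another thread.

public class KillerTask {
    public static void main(String[] args) throws InterruptedException {
        // Warm up: enough short invocations for countOdds to be compiled normally (not OSR).
        for (int i = 0; i < 20_000; i++) {
            countOdds(1_000);
        }
        // Now run the compiled, poll-free loop with a huge limit on another thread.
        Thread t = new Thread(() -> System.out.println("How Odd:" + countOdds(Integer.MAX_VALUE)));
        t.setDaemon(true);
        t.start();
        Thread.sleep(1_000); // let the thread settle into the compiled loop
        // Any safepoint operation (System.gc() is an easy one to trigger) now has to
        // wait for the loop to finish before the JVM can stop all threads.
        System.gc();
        System.out.println("This only prints once the loop is done");
    }

    static long countOdds(int limit) {
        long l = 0;
        for (int i = 0; i < limit; i++) {
            if ((i & 1) == 1) l++;
        }
        return l;
    }
}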
      This fucker now hangs in both the Eclipse and javac versions. Joy!

      Side note: OSR and JMH

I look at benchmarks people write quite a bit, and it is very common to see people who roll their own benchmark harnesses put their "code under measurement" in a counted loop, run it 100,000 times and derive their results. Often that means the code they are measuring is OSR compiled code, which is different (for better or worse, mostly worse) from the code they will be running normally. If you are one of these people, I beg you: cut that fucking shit out!
      JUST USE JMH!
      PLEASE?
To a large extent it will cover your use case. Where it doesn't cover your use case, post a question on the jmh-dev mailing list, it might be that you've missed something. If it turns out JMH cannot cover your use case, make sure you've read the JMH samples before rolling your own harness.
      I'm not going to include a before and after example, because it just makes me sad. Look at SO for some fine examples.

      Counted Loops?

      I keep saying 'counted loops' like it means something, so here are some examples for you to play with and contemplate:
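For example (a hedged sketch, names mine; which of these end up treated as counted loops is exactly the thing to experiment with):

class CountedOrNot {
    // int counter, int bound, unit stride: the classic counted loop shape.
    static long intLoop(int limit) {
        long sum = 0;
        for (int i = 0; i < limit; i++) sum += i;
        return sum;
    }

    // long counter: typically not treated the same way.
    static long longLoop(long limit) {
        long sum = 0;
        for (long i = 0; i < limit; i++) sum += i;
        return sum;
    }

    // non-unit stride.
    static long stridedLoop(int limit) {
        long sum = 0;
        for (int i = 0; i < limit; i += 2) sum += i;
        return sum;
    }

    // a call in the body (inlining decisions apply).
    static long loopWithCall(int limit) {
        long sum = 0;
        for (int i = 0; i < limit; i++) sum += opaque(i);
        return sum;
    }

    static long opaque(long v) { return v; }
}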

      So, as you can see the definition ain't so clear and I would imagine some of the above behaviour will vary depending on JVM version/vendor/phase of the moon etc.

      Summary

      So, it seems my previous example was not as tested as it should have been. I apologize to anyone who has based life changing decisions on that sample. I hope this post helps in explaining away the subtleties which led to my downfall.
      The above examples help demonstrate that the line between counted and uncounted loops is a tad confusing and the classification may depend on differences in bytecode, compiler mode, and so on.

      Why (Most) Sampling Java Profilers Are Fucking Terrible

      This post builds on the basis of a previous post on safepoints. If you've not read it you might feel lost and confused. If you have read it, and still feel lost and confused, and you are certain this feeling is related to the matter at hand (as opposed to an existential crisis), please ask away.
      So, now that we've established what safepoints are, and that:
      1. Safepoint polls are dispersed at fairly arbitrary points (depending on execution mode, mostly at uncounted loop back edge or method return/entry).
      2. Bringing the JVM to a global safepoint is high cost
      We have all the information we need to conclude that profiling by sampling at a safepoint is perhaps a bit shite. This will not come as a surprise to some, but this issue is present in the most commonly used profilers. According to this survey by RebelLabs this is the breakdown:


VisualVM, NB Profiler (same thing), YourKit and JProfiler all provide a sampling CPU profiler which samples at a safepoint. Seeing how this is a rather common issue, let's dig into it.

      How Sampling Execution Profilers Work (in theory)

      Sampling profilers are supposed to approximate the 'time spent' distribution in our application by collecting samples of where our application is at different points in time. The data collected at each sample could be:
      • current instruction
      • current line of code
      • current method
      • current stack trace
The data can be collected for a single thread or all threads at each sample. What needs to hold for sampling to work?
      "However, for sampling to produce results that are comparable to a full (unsampled) profile, the following two conditions must hold. First, we must have a large number of samples to get statistically significant results. For example, if a profiler collects only a single sample in the entire program run, the profiler will assign 100% of the program execution time to the code in which it took its sample and 0% to everything else. [...]
      Second, the profiler should sample all points in a program run with equal probability. If a profiler does not do so, it will end up with bias in its profile. For example, let’s suppose our profiler can only sample methods that contain calls. This profiler will attribute no execution time to methods that do not contain calls even though they may account for much of the program’s execution time." - from Evaluating the Accuracy of Java Profilers, we'll get back to this article in a bit
        That sounds easy enough, right?
        Once we have lots of samples we can construct a list of hot methods, or even code lines in those methods (if the samples report it), we can look at the samples distributed on the call tree (if call traces are collected) and have an awesome time!

        How Do Generic Commercial Java Sampling Execution Profilers Work?

        Well, I can sort of reverse engineer here from different solutions, or read through open source code bases, but instead I'll offer unsupported conjecture and you are free to call me out on it if you know better. Generic profilers rely on the JVMTI spec, which all JVMs must meet:
        • JVMTI offers only safepoint sampling stack trace collection options (GetStackTrace for the calling thread doesn't require a safepoint, but that is not very useful to a profiler. On Zing  GetStackTrace to another thread will bring only that thread to a safepoint.). It follows that vendors who want their tools to work on ALL JVMs are limited to safepoint sampling.
        • You hit a global safepoint whether you are sampling a single thread or all threads (at least on OpenJDK, Zing is slightly different but as a profiler vendor OpenJDK is your assumption.). All profilers I looked into go for sampling all threads. AFAIK they also do not limit the depth of the stack collected. This amounts to the following JVMTI call: JvmtiEnv::GetAllStackTraces(0, &stack_info, &thread_count)
• So this adds up to: set up a timer thread which triggers at 'sampling_interval' and gathers all stack traces (sketched in plain Java below).
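Not the agent code itself, but a minimal plain-Java sketch of the same idea (Thread.getAllStackTraces() also brings all threads to a safepoint, so it has the same cost shape as the JVMTI call):

import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class NaiveSafepointSampler {
    public static void main(String[] args) {
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        // One global safepoint per sample: all threads, unbounded stack depth.
        timer.scheduleAtFixedRate(() -> {
            Map<Thread, StackTraceElement[]> sample = Thread.getAllStackTraces();
            record(sample);
        }, 0, 100, TimeUnit.MILLISECONDS);
    }

    static void record(Map<Thread, StackTraceElement[]> sample) {
        // Aggregate into per-method hit counts / call trees here.
    }
}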
        This is bad for several reasons, some of which are avoidable:
1. Sampling profilers need samples, so it is common to set the sampling frequency quite high (usually 10 times a second, or every 100ms). It's instructive to set the -XX:+PrintGCApplicationStoppedTime flag and see what sort of pause time this introduces. It's not unusual to see a few milliseconds of pause, but YMMV (depending on number of threads, stack depth, TTSP etc). A 5ms pause every 100ms will mean a 5% overhead (actual damage is likely worse than that) introduced by your profiler. You can usually control the damage here by setting the interval longer, but this also means you will need a longer profiling period to get a meaningful sample count.
2. Gathering full stack traces from all the threads means your safepoint operation cost is open ended. The more threads your application has (think application server, SEDA architecture, lots of thread pools...), and the deeper your stack traces (think Spring and Co.), the longer your application will wait for a single thread to go round taking names and filling forms. This was clearly demonstrated in the previous post. AFAIK, current profilers do nothing to help you here. If you are building your own profiler it would seem sensible to set a limit on either quantity so that you can box your overheads. The JVMTI functionality allows you to query the list of current threads; you could sample all of them if there are fewer than 100 and otherwise pick a random subset of 100 to sample. It would make sense to perhaps bias towards sampling threads that are actually doing something as opposed to threads which spend all their time blocked.
        3. As if all that was not bad enough, sampling at a safepoint is a tad meaningless.
        Points 1 and 2 are about profiling overheads, which is basically about cost. In my previous post on safepoints I looked at these costs, so there's no point repeating the exercise.  Cost may be acceptable for good profiling information, but as we'll see the information is not that great.
        Point 3 bears explaining, so off we go in the search for meaning.

        Safepoint Sampling: Theory

So what does sampling at a safepoint mean? It means only the safepoint polls in the running code are visible. Given hot code is likely compiled by C1/C2 (client/server compilers) we have reduced our sampling opportunities to method exit and uncounted loop backedges. This leads to the phenomenon called safepoint bias, whereby the sampling profiler samples are biased towards the next available safepoint poll location (this fails the second criterion set out above: "the profiler should sample all points in a program run with equal probability").
This may not sound so bad at first, so let's work through a simple example and see which line gets the blame.
NOTE: In all of the following examples I will be using JMH as the test harness and make use of the 'CompilerControl' annotation to prevent inlining. This will let me control the compilation unit limits, and may seem cruel and unusual, or at least unfair, of me. Inlining decisions in the 'wild' are governed by many factors and it is safe (in my opinion) to consider them arbitrary (in the hands of several compilers/JVM vendors/command line arguments etc.). Inlining may well be the "mother of all optimizations", but it is a fickle and wily mother at that.
        Let's look at something simple:
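A hedged reconstruction of the kind of benchmark described here (the benchmark names match the profiler output further down, but this is not necessarily the original code):

import org.openjdk.jmh.annotations.*;

@State(Scope.Thread)
public class SafepointProfiling {
    @Param("1000")
    int size;
    byte[] buffer;

    @Setup
    public void setup() {
        buffer = new byte[size];
    }

    @Benchmark
    public long meSoHotInline() {
        return sum(buffer); // free to inline into the JMH measurement loop
    }

    @Benchmark
    public long meSoHotNoInline() {
        return sumNoInline(buffer); // kept as a separate compilation unit
    }

    // Counted int loop: no safepoint poll on the backedge, only one on method exit.
    private static long sum(byte[] b) {
        long sum = 0;
        for (int i = 0; i < b.length; i++) {
            sum += b[i];
        }
        return sum;
    }

    @CompilerControl(CompilerControl.Mode.DONT_INLINE)
    private static long sumNoInline(byte[] b) {
        long sum = 0;
        for (int i = 0; i < b.length; i++) {
            sum += b[i];
        }
        return sum;
    }
}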

        This is an easy example to think about. We can control the amount of work in the method by changing the size of the array. We know the counted loop has no safepoint poll in it (verified by looking at the assembly output), so in theory the methods above will have a safepoint at method exit. The thing is, if we let the method above get inlined the end of method safepoint poll will disappear, and the next poll will be in the measurement loop:

So, it would seem reasonable to expect the method to get blamed if it is not inlined, but if it does get inlined we can expect the measurement method to get blamed. Right? Very reasonable, but a bit off.

        Safepoint Sampling: Reality

        Safepoint bias is discussed in this nice paper from 2010: Evaluating the Accuracy of Java Profilers, where the writers recognize that different Java profilers identify different hotspots in the same benchmarks and go digging for reasons. What they don't do is set up some benchmarks where the hotspot is known and use those to understand what it is that safepoint biased profilers see. They state:
        "If we knew the “correct” profile for a program run, we could evaluate the profiler with respect to this correct profile. Unfortunately, there is no “correct” profile most of the time and thus we cannot definitively determine if a profiler is producing correct results."
        So, if we construct a known workload... what do these profilers see?
        We'll study that with the JMH safepoint biased profiler "-prof stack". It is much like JVisualVM in the profiles it presents for the same code, and a damn sight more convenient for this study.
        NOTE: In the following sections I'm using the term sub-method to describe a method which is being called into from another method. E.g. if method A calls method B, B is a sub method of A. Maybe a better terminology exists, but that's what I mean here.
        If we run the samples above we get 2 different hot lines of code (run with '-prof stack:detailLine=true'):
        # Benchmark: safepoint.profiling.SafepointProfiling.meSoHotInline
        ....[Thread state: RUNNABLE]...
         99.6%  99.8% meSoHotInline_avgt_jmhStub:165

        # Benchmark: safepoint.profiling.SafepointProfiling.meSoHotNoInline
        ....[Thread state: RUNNABLE]...
         99.4%  99.6% meSoHotNoInline_avgt_jmhStub:163

Neither one is in the actually hot method. It seems that the method exit safepoint is not deemed indicative of its own method, but rather of the line of code from which it was called. So forcing the method under measurement to not inline implicated the calling line of code in the measurement loop, while leaving it to inline meant the back edge of the loop got the blame. It also appears that an uncounted loop safepoint poll is deemed indicative of its own method.
We may deduce (but we won't necessarily be correct) that when looking at this kind of profile without line of code data a hot method is indicative of:
        1. Some non-inlined sub method is hot*
        2. Some code (own method? inlined sub method? non-inlined sub method?) in an uncounted loop is hot*
        Having line of code data can help disambiguate the above cases, but it's not really very useful as line of code data. A hot line of code would be indicative of:
        1. Line has a method call: A method called from this line is hot (or maybe it's inlined sub-methods are)*
        2. Line is a loop back edge: Some code (include inlined submethods) in this loop is hot*
          *Does that seem useful? Don't get your hopes up.
        Because we usually have no idea which methods got inlined this can be a bit confusing (you can use -XX:+PrintInlining if you want to find out, but be aware that inlining decisions can change from run to run).

        Mind The Gap

        If the above rules held true you could use a safepoint biased profile by examining the code under the blamed node in the execution tree. In other words, it would mean a hot method indicates the hot code lies somewhere in that code or in the method it calls. This would be good to know, and the profile could serve as a starting point for some productive digging. But sadly, these rules do not always hold true. They are missing the fact that the hot code can be anywhere between the indicated safepoint poll and the previous one. Consider the following example:
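Here's a hedged sketch of the kind of code being described (the method names echo the discussion, but this is not the original benchmark): the time is spent in the counted loop, while the only nearby safepoint poll opportunity comes courtesy of the non-inlined call under setResult.

import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.infra.Blackhole;

@State(Scope.Thread)
public class MindTheGap {
    int[] buffer = new int[64 * 1024];

    @Benchmark
    public void hotLoopBlamesSetResult(Blackhole bh) {
        long sum = 0;
        // All the time is spent here: a counted loop with no safepoint poll.
        for (int i = 0; i < buffer.length; i++) {
            sum += buffer[i];
        }
        setResult(bh, sum);
    }

    private void setResult(Blackhole bh, long sum) {
        // Cheap, but calls into a method that is not inlined, so the next
        // safepoint poll opportunity (and the blame) lands around here.
        consumeNoInline(bh, sum);
    }

    @CompilerControl(CompilerControl.Mode.DONT_INLINE)
    private static void consumeNoInline(Blackhole bh, long sum) {
        bh.consume(sum);
    }
}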

Clearly, the time is spent in the loop before calling setResult, but the profile blames setResult. There's nothing wrong with setResult except that a method it calls into is not inlined, providing our profiler with a blame opportunity. This demonstrates the randomness with which the safepoint poll opportunities present themselves to user code, and shows that the hot code may be anywhere between the current safepoint poll and the previous one. This means that a hot method/line-of-code in a safepoint biased profile is potentially misleading without some idea of where the previous safepoint poll is. Consider the following example:
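And a hedged sketch of the deeper variant (again, assumed structure and names): the hot loop sits in the topmost method, the call chain below it is cheap, and the non-inlined frames decide where the blame can land.

import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.infra.Blackhole;

@State(Scope.Thread)
public class MindTheGapDeep {
    int[] buffer = new int[64 * 1024];

    @Benchmark
    public void blameNineLevelsDown(Blackhole bh) {
        long sum = 0;
        for (int i = 0; i < buffer.length; i++) { // the actual hot code
            sum += buffer[i];
        }
        setResult1(bh, sum);
    }

    // A chain of cheap methods; only the non-inlined ones offer the profiler a frame.
    private void setResult1(Blackhole bh, long s) { setResult2(bh, s); }
    private void setResult2(Blackhole bh, long s) { setResult3(bh, s); }
    private void setResult3(Blackhole bh, long s) { setResult4(bh, s); }
    private void setResult4(Blackhole bh, long s) { setResult5(bh, s); }
    private void setResult5(Blackhole bh, long s) { setResult6(bh, s); }
    @CompilerControl(CompilerControl.Mode.DONT_INLINE)
    private void setResult6(Blackhole bh, long s) { setResult7(bh, s); }
    private void setResult7(Blackhole bh, long s) { setResult8(bh, s); }
    private void setResult8(Blackhole bh, long s) { consumeNoInline(bh, s); }

    @CompilerControl(CompilerControl.Mode.DONT_INLINE)
    private static void consumeNoInline(Blackhole bh, long s) { bh.consume(s); }
}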

The profiler implicates the caller to a cheap method 9 levels down the stack, but the real culprit is the loop at the topmost method. Note that inlining will prevent a method from showing, but not-inlined frames only break the gap between safepoints on return (on OpenJDK anyhow; this is vendor and whim dependent, e.g. Zing puts the method level safepoint on entry, and I'm not sure where J9 stands on this issue. This is not to say that one way is somehow better than the other, just that the location is arbitrary). This is why setResult6, which is not inlined and higher up the stack, doesn't show up.

        Summary: What Is It Good For?

        As demonstrated above, a safepoint sampling profiler can have a wildly inaccurate idea of where the hot code in your application is. This renders derived observations on "Running" threads pretty suspect, but they are at least correct observations on which threads are running. This doesn't mean they are entirely useless and sometimes all we need is a hint in the right direction for some good analysis to happen, but there's a potential for huge waste of time here. While samples of code running in the interpreter do not suffer safepoint bias, this is not very useful as hot code is quickly compiled. If your hot code is running in the interpreter you have bigger fish to fry than safepoint bias...
The stack traces for blocked threads are accurate, and so the "Waiting" profile is useful to discover the blocked code profile. If blocking methods are at the root of your performance issue then this will be a handy observation.
        There are better options out there! I'll get into some of them in following posts:
        - Java Mission Control
        - Solaris Studio
        - Honest-Profiler
        - Perf + perf-map-agent (or perfasm if your workload is wrapped in a JMH benchmark)
        No tool is perfect, but all of the above are a damn sight better at identifying where CPU time is being spent.
Big thanks to the many reviewers; if you find any issues please comment and I will punish them severely.

        GC 'Nepotism' And Linked Queues

I've just run into this issue this week, and it's very cute, so this here is a summary. Akka has their own MPSC linked queue implementation, and this week it was suggested they swap to using JCTools. The context in which the suggestion was made was a recently closed bug with the mystery title: AbstractNodeQueue suffers from nepotism
        An Akka user claimed to not be suffering from the bug because they were using JCTools, but as it turned out the issue was indeed present so I had to fix it etc... But what the hell is GC nepotism? You won't find it in your GC textbooks, but it's a catchy term for a very real problem.

Your Dead Old Relatives Can Promote You!

In the OpenJDK generational GCs there are distinct generations, Old and New. The 2 generations get collected separately, but are not independent of each other. In particular, references from NewGen to OldGen will keep OldGen objects alive through an OldGen collection cycle, and similarly references from OldGen to NewGen will keep objects alive through a NewGen collection. This is all quite straight forward, but has a surprising implication:
Dead OldGen objects will keep dead NewGen objects they reference alive until the OldGen is collected.
The reverse is also true, but since the whole notion of generational GC is based on the assumption that the new gen is collected far more often than the old gen, the above case is the real issue. How does this affect you? Is this even a real problem?
        Turns out it's real enough to have convinced Doug Lea to fix a particular manifestation of it, but also subtle enough to have slipped past him (and me, and the Akka guys, and Martin Thompson, and others no doubt) in the first place.

        Linked Queues: Pathological Nepotism

        Consider the following LinkedQueue implementation:
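A minimal, single-threaded sketch of the structure being discussed (not the actual Akka/JCTools code, and with none of their concurrency machinery): nodes are linked on offer, and the consumer end always retains one stub node whose value is null.

class LinkedQueue<E> {
    static final class Node<E> {
        E value;
        Node<E> next;
        Node(E value) { this.value = value; }
    }

    private Node<E> head; // newest node, producer end; head.next == null
    private Node<E> tail; // stub node, consumer end; tail.value == null

    LinkedQueue() {
        Node<E> stub = new Node<>(null);
        head = tail = stub;
    }

    public void offer(E e) {
        Node<E> n = new Node<>(e);
        head.next = n; // link the new node behind the current head
        head = n;
    }

    public E poll() {
        Node<E> next = tail.next;
        if (next == null) {
            return null; // empty: head == tail
        }
        E value = next.value;
        next.value = null; // MUST clear: this node becomes the new retained stub
        tail = next;       // the old stub is now dead... but still points at 'next'
        return value;
    }
}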
        The following invariants are maintained:
        • head == tail <=> queue is empty
        • head.next == null <=> queue is empty
        • tail.value == null
        • queue is empty <=> head == tail && tail.next == null && tail.value == null
The nodes are created and linked together on offer, and dereferenced as we poll. The poll method MUST clear the value because the queue retains the last node. If we were to let the last node keep the value we would have effectively leaked this value reference.
Ignoring the values, imagine the queue is part of the live set and we have enqueued a few elements; the arrows denote next references:
        head -> N3 -> null
        tail -> N0 -> N1 -> N2 -> N3

If we now dequeue the elements we'll get back to empty; brackets denote live objects referenced from dead ones:
        head -> N3 -> null
        tail -> N3 -> null
        DEAD:  N0 -> N1 -> N2 (-> N3)

Dead objects are traditionally considered the GC's problem, and cleaning up about-to-be-collected objects is considered an anti-pattern. After all, what is the point of cleaning up garbage? You are just duplicating work. Indeed in a single generation collector this is a non-issue.
        But consider for a moment the possibility that one of the above nodes was promoted into the OldGen. This is something which may not happen in short lived tests, or even 'lucky' long lived tests, but is obviously quite possible. We will have:
        head -> N3 -> null
        tail -> N3 -> null
        Zombie NEW:  N1 -> N2 (-> N3)
        DEAD OLD:  N0 (-> N1 -> N2 (-> N3))

        The young are alive, as far as the old are concerned. The old are dead, but their promotion gives life to their 'children' until they get collected. To demonstrate this issue I've written the following program: 
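A hedged sketch of such a program (not the original, and reusing the LinkedQueue sketch from above): keep a long queue churning so that nodes live long enough to be promoted, then watch the GC logs.

public class NepotismDemo {
    public static void main(String[] args) {
        LinkedQueue<Integer> q = new LinkedQueue<>();
        // Pre-fill so nodes live long enough to be promoted while in the queue.
        for (int i = 0; i < 1_000_000; i++) {
            q.offer(i);
        }
        // Steady churn: every offer/poll pair creates one node and kills the oldest
        // one; promoted dead nodes drag the chain of younger dead nodes with them.
        while (true) {
            q.offer(1);
            q.poll();
        }
    }
}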

        If you look at the GC logs, or plug in Visual VM with the Visual GC plugin you will notice the young GC is entirely ineffective. Everything gets promoted to old, until an old GC clears the lot. After the old GC the problem disappears (in the test program, but is likely to repeat in real programs). This is 'pathological' because ANY promoted node will result in the promotion of ALL following nodes until a GC resolves the issue. 

        At this point you might be shaking your head sadly and saying: "Ye thick old fart, surely this is a silly issue which NEVER happens in the real world". I've had my share of heads being shook at me and am not offended, but consider that this issue is real enough for other people:

        An Exercise To the Reader


My plan was to walk through the way this issue has been fixed in JCTools vs Akka vs Agrona, each solution reflecting different considerations, but sadly I'm quite short on time for a tidy summary and will give my observations, which you can choose to read up on and argue with.
All three implementations follow the algorithm outlined by D. Vyukov. Note that Agrona is ignoring the Queue interface requirement for poll (return null iff the queue is empty).

        • In Akka the next reference is nulled on poll. This solves the issue, but at the cost of further weakening the accuracy of the size() method.
        • In JCTools the next reference is set to point to the discarded node. This helps in telling the difference between a removed node and the head. This piece of information is used in the size method to skip gaps in the size counted chain when racing with the consumer thread. The result is a more accurate size estimate. The change does come at a cost to the size method however as we are no longer able to rely on the chain remaining whole throughout the size operation.
• In Agrona, the next reference is nulled as in Akka. A further effort is made to reuse a special empty node to avoid repeated promotion of transient nodes in empty queues. The effort is arguably worthwhile on systems with large numbers of active but mostly empty queues.

        Open questions/discussion:

• How common is this issue for 'classic' data structures? It seems reasonable that a host of queue/stack/tree data structures are similarly vulnerable.
        • How common is this issue for idiomatic 'user' Java code? given the tendency towards composition rather than inheritance and the drive towards smaller objects, we may argue that user object graphs often approximate trees. If we have short-lived 'trees' a parent node can certainly drag a useless chunk of state to the old gen. If a reference bridges 2 trees we may have something approximating the pathological behaviour above. I'm not aware of any studies trying to determine the impact of nepotism on user code, part of the problem would be the definition of 'typical' user code.
• Where does the term 'nepotism' come from? I made an attempt at tracing the origins, but got nowhere. The problem seems to have been documented at least 8 years ago, but the term 'nepotism' is not used in most of the writing on it.

        The Pros and Cons of AsyncGetCallTrace Profilers

        So, going on from my whingy post on safepoint bias, where does one go to get their profiling kicks? One option would be to use an OpenJDK internal API call AsyncGetCallTrace to facilitate non-safepoint collection of stack traces.
AsyncGetCallTrace is NOT official JVM API. It's not a comforting place to be for profiler writers, and was originally only implemented on OpenJDK/Oracle JVMs (Zing has recently started supporting AGCT to enable support for Solaris Studio and other profilers, I will write a separate post on Studio). Its original use case was for Solaris Studio, and it provides the following API (see forte.cpp, the name is a leftover from the Forte Analyzer days). Here's what the API adds up to:

        Simple right? You give it a ucontext and it fills in the trace with call frames (or it gets the hose again).

        A Lightweight Honest Profiler

        The 'async' in the name refers to the fact that AGCT is safe to call in a signal handler. This is mighty handy in a profiling API as it means you can implement your profiler as follows:
        1. In a JVMTI agent, register a signal handler for signal X.
2. Set up a timer, triggering the signal X at the desired sample frequency. Honest Profiler is using the ITIMER_PROF option, which means we'll get signalled based on CPU time. The signal will be sent to the process and one of the running threads will end up being interrupted and calling into our signal handler. Note that this assumes the OS will distribute the signals fairly between threads so we will get a fair sample of all running threads.
        3. From signal handler, call AGCT: Note that the interrupted thread (picked at 'random' from the threads currently executing on CPU) is now running your signal handler. The thread is NOT AT A SAFEPOINT. It may not be a Java thread at all.
        4. Persist the call trace: Note that when running in a signal handler only 'async' code is legal to run. This means for instance that any blocking code is forbidden, including malloc and IO.
        5. WIN!
The ucontext ingredient is the very same context handed to you by the signal handler (your signal handler is a callback of the signature handle(int signum, siginfo_t *info, void *context)). From it AGCT will dig up the instruction/frame/stack pointer values at the time of the interrupt and do its best to find out where the hell you landed.
This exact approach was followed by Jeremy Manson, who explained the infrastructure and open sourced a basic profiler (in a proof of concept, non-committal sort of way) in his great series of posts on the matter.
The same code was then picked up by Richard Warburton and further improved and stabilized in Honest-Profiler (to which I have made some contributions). Honest Profiler is an effort to make that initial prototype production ready, and complement it with some tooling to make the whole thing usable. The serialization-in-a-signal-handler issue is resolved by using a lock-free MPSC ring-buffer of pre-allocated call trace structs (pointing to preallocated call frame arrays). A post-processing thread then reads the call traces, collects some extra info (like converting BCI to line number, jmethodIds into class names/file names etc) and writes to a log file. See the project's wiki for more details.
        The log file is parsed offline (i.e. by some other process, on another machine, at some other time. Note that you can process the file while the JVM is still profiling, you need not wait for it to terminate.) to give you the hot methods/call tree breakdown. I'm going to use Honest-Profiler in this post, so if you want to try this out at home you're going to have to go and build it to experiment locally (works on OpenJDK/Zulu 6/7/8 + Oracle JVMs + recent Zing builds). I'll do some comparisons for the same experiments with JMC, you'll need an Oracle JVM (1.7u40 and later) to try that out.

        What Does AGCT Do?

Going by its API, AGCT is a mapping between an instruction/frame/stack pointer and a call trace. The call trace is an array of Java call frames (jmethodId, BCI). To produce this mapping the following process is followed:
1. Make sure the thread is in a 'walkable' state, in particular not when:
        • Thread is not a Java thread.
        • GC is active
        • New/uninitialized/just about to die. I.e. threads that are either before or after having Java code running on them are of no interest.
        • During a deopt
2. Find the current/last Java frame (as in an actual frame on the stack; revisit Operating Systems 101 for definitions of stacks and frames):
        • The instruction pointer (commonly referred to as the PC - Program Counter) is used to look up a matching Java method (compiled/interpreter). The current PC is provided by the signal context.
        • If the PC is not in a Java method we need to find the last Java method calling into native code.
• Failure is an option! We may be in an 'unwalkable' frame for all sorts of reasons... This is quite complex and if you must know I urge you to get comfy and dive into the maze of relevant code. Trying to qualify the top frame is where most of the complexity is for AGCT.
3. Once we have a top frame we can fill the call trace. To do this we must convert the real frame and PC into:
        • Compiled call frames: The PC landed on a compiled method, find the BCI (Byte Code Index) and record it and the jMethodId
        • Virtual call frames: The PC landed on an instruction from a compiled inlined method, record the methods/BCIs all the way up to the framing compiled method
        • Interpreted call frames
        • From a compiled/interpreted method we need to walk to the calling frame and repeat until we make it to the root of the Java call trace (or record enough call frames, whichever comes first)
4. WIN!
Much like a medication's list of potential side effects, the error code list supported by a function can be very telling. AGCT supports the following reasons for not returning a call trace:

        While this data is reported from AGCT, it is often missing from the reports based on it. More on that later.

        The Good News!

So, looking at the bright side, we can now see between safepoint polls!!! How awesome is that? Let's see exactly how awesome by running the benchmarks which we could not measure correctly with the safepoint biased profiler in the previous post.
        [Note: Honest-Profiler reports (t X,s Y) which reflect the total % of stack trace samples containing this method+line vs. the self % of samples in which this method+line is the leaf. The output is sorted by self.]:

        Sweet! Honest Profiler is naming the right methods as the hot methods, and nailing the right lines where the work is happening.
        Let's try the copy benchmark, this time comparing the Honest-Profiler result with the JMC result:
Note the difference that switching on the "-XX:+UnlockDiagnosticVMOptions -XX:+DebugNonSafepoints" flags makes to the JMC report. Honest-Profiler now (recent change, hot off the press) causes the flag to be on without requiring user intervention, but JMC does not currently do that (this is understandable when attaching after the process started, but should perhaps be the default if the process is started with recording on).
        Further experimentation and validation exercises for the young and enthusiastic:
        • Compare the Honest-Profiler results with JFR/JMC results on larger applications. What's different? Why do you think it's different?
        • Run with -XX:+PrintGCApplicationStoppedTime to see no extra safepoints are caused by the profiling (test with profiler on/off).
• The extra-studious of y'all can profile the other cases discussed in the previous post to see they similarly get resolved.
        While this is a step forward, we still got some issues here... 

        Collection errors: Runtime Stubs

As we can see from the list of errors, any number of events can result in a failed sample. A particularly nasty type of failed sample is when you hit a runtime generated routine which AGCT fails to climb out of. Let's see what happens when we don't roll our own array copy:
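A hedged sketch of the kind of benchmark meant here (names follow the output below, but this is not necessarily the original code): same copy, but delegated to the System.arraycopy intrinsic.

import org.openjdk.jmh.annotations.*;

@State(Scope.Thread)
public class ArrayCopyProfiling {
    @Param("1000")
    int size;
    byte[] buffer;
    byte[] dst;

    @Setup
    public void setup() {
        buffer = new byte[size];
        dst = new byte[size];
    }

    @Benchmark
    public void systemArrayCopy() {
        // Compiled down to the arraycopy intrinsic: a runtime stub AGCT cannot walk out of.
        System.arraycopy(buffer, 0, dst, 0, buffer.length);
    }
}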


Now, this great unknown marker is a recent addition to Honest-Profiler, which increases its honesty about failed samples. It indicates that for 62.9% of all samples, AGCT could not figure out what was happening and returned ticks_unknown_java. Given that there's very little code under test here we can deduce the missing samples all fall within System.arraycopy (you can pick a larger array size to further prove the point, or use a native profiler for comparison).
        Profiling the same benchmarks under JMC will not report failed samples, and the profiler will divide the remaining samples as if the failed samples never happened. Here's the JMC profile for systemArrayCopy:

        Number of samples(over 60 seconds) : 2617
        Method::Line                              Samples   %
        systemArrayCopy_avgt_jmhStub(...) :: 165    1,729   66.043
        systemArrayCopy()                 ::  41      331   12.643
        systemArrayCopy()                 ::  42      208    7.945
        systemArrayCopy_avgt_jmhStub(...) :: 166       93    3.552
        systemArrayCopy_avgt_jmhStub(...) :: 163       88    3.361

        JMC is reporting a low number of samples (the total is sort of available in the tree view as the number of samples in the root), but without knowing what the expected number of samples should be this is very easy to miss. This is particularly true for larger and noisier samples from real applications collected over longer period of time.
Is this phenomenon isolated to System.arraycopy? Not at all, here are crc32 and adler32 as a further comparison:


What do CRC32 and System.arraycopy have in common? They are both JVM intrinsics, replacing a method call (Java/native doesn't really matter, though in this case both are native) with a combination of inlined code and a call to a JVM runtime generated method. This method call is not guarded the same way a normal call into native methods is, and thus the AGCT stack walking is broken.
        Why is this important? The reason these methods are worth making into intrinsics is because they are sufficiently important bottlenecks in common applications. Intrinsics are like tombstones for past performance issues in this sense, and while the result is faster than before, the cost is unlikely to be completely gone.
Why did I pick CRC32? Because I recently spent some time profiling Cassandra. Version 2 of Cassandra uses adler32 for checksums, while version 3 uses crc32. As we can see from the results above, it's potentially a good choice, but if you were to profile Cassandra 3 it would look better than it actually is because all the checksum samples are unprofilable. Profiling with a native profiler will confirm that the checksum cost is still a prominent element of the profile (of the particular setup/benchmark I was running).
        AGCT profilers are blind to runtime stubs (some or all, this may get fixed in future releases...). Failed samples are an indication of such a blind spot.

        Exercise to the reader:

• Construct a benchmark with heavy GC activity and profile. The CPU spent on GC will be absent from your JMC profile, and should show as ticks_GC_active in your Honest-Profiler profile.
• Construct a benchmark with heavy compiler activity and profile. As above, look for compilation CPU. There's no compilation error code, but you should see a lot of ticks_unknown_not_Java, which indicates a non-Java thread has been interrupted (this is a conflated error code, we'll be fixing it soon).
        • Extra points! Construct a benchmark which is spending significant time deoptimising and look for the ticks_deopt error in your Honest-Profiler profile.

        Blind Spot: Sleeping code

        Since sleeping code doesn't actually consume CPU it will not appear in the profile. This can be confusing if you are used to the JVisualVM style reporting where all stacks are reported including WAITING (or sometimes RUNNING Unsafe.park). Profiling sleeping or blocked code is not something these profilers cover. I mention this not because they promise to do this and somehow fail to deliver, but because it's a common pitfall. For Honest-Profiler this feature should help highlight mostly off-CPU applications as such by comparing expected and actual samples (the delta is signals which were not delivered to the process as it was not running).


        Error Margin: Skidding + inlining

I'll dive deeper into this one another time, since other profilers share this issue. In essence the problem is that instruction profiling is not accurate and we can often encounter a skid effect when sampling the PC (program counter). This in effect means the instruction reported can be a number of instructions after the instruction where the time is spent. Since AsyncGetCallTrace relies on PC sampling to resolve the method and code line we get the same inaccuracy reflected through the layer of mapping (PC -> BCI -> LOC), but where on the assembly level we deal with a single blob of instructions within which proximity is meaningful, in converting back to Java we may have slipped from one inlined method to the next and the culprit is no longer in a nearby line of code.
        To be continued...
        But seriously guys, we should be using Fortran for this

        Summary: What is it good for?

        AsyncGetCallTrace is a step up from GetStackTraces as it operates at lower overheads and does not suffer from safepoint bias. It does require some mental model adjustments:
        1. -XX:+UnlockDiagnosticVMOptions -XX:+DebugNonSafepoints : if this is not on you are still hobbled by the resolution of the debug data. This is different from safepoint bias, since you actually can sample anywhere, but the translation from PC to BCI kills you. I've not seen these flags result in measurable overhead, so I'm not convinced the default value is right here, but for now this is how it is. Honest-Profiler takes care of this for you, with JMC you'll need to add it to your command line.
        2. Each sample is for a single, on CPU, thread: This is very different from the GetStackTraces approach which sampled all threads. It means you are getting less traces per sample, and are completely blind to sleeping/starved threads. Because the overhead is that much lower you can compensate by sampling more frequently, or over longer periods of time. This is a good thing, sampling all the threads at every sample is a very problematic proposition given the number of threads can be very large.
        AsyncGetCallTrace is great for profiling most 'normal' Java code, where hotspots are in your Java code, or rather in the assembly code they result in. This seems to hold in the face of most optimizations to reasonable accuracy (but may on occasion be off by quite a bit...).

        AsyncGetCallTrace is of limited use when:
1. Large numbers of samples fail: This can mean the application is spending its time in GC/Deopts/Runtime code. Watch for failures. I think currently Honest-Profiler offers better visibility on this, but I'm sure the good folks of JMC can take a hint.
        2. Performance issue is hard to glean from the Java code. E.g. see the issue discussed in a previous post using JMH perfasm (false sharing on the class id in an object header making the conditional inlining of an interface call very expensive). 
        3. Due to instruction skid/compilation/available debug information the wrong Java line is blamed. This is potentially very confusing in the presence of inlining and code motion.
        Using Honest-Profiler you can now profile Open/Oracle JDK6/7/8 applications on Linux and OS X. You can also use it to profile Zing applications on recent Zing builds (version 15.05.0.0 and later, all JDKs). Honest-Profiler is lovely, but I would caution readers that it is not in wide use, may contain bugs, and should be used with care. It's a useful tool, but I'm not sure I'd unleash it on my production systems just yet ;-).
        JMC/JFR is available on Oracle JVMs only from JDK7u40, but works on Linux, OS X, Windows and Solaris (JFR only). JMC/JFR is free for development purposes, but requires a licence to use in production. Note that JFR collects a wealth of performance data which is beyond the scope of this post, I whole heartedly recommend you give it a go.

        A big thank you to all the kind reviewers: JP Bempel, Doug Lawrie, Marcus Hirt and Richard Warburton, any remaining errors will be deducted from their bonuses directly.

        Fixing Coordinated Omission in Cassandra Stress

I have it from reliable sources that incorrectly measuring latency can lead to losing one's job, loved ones, will to live and control of bowel movements. Out of my great love for fellow humans I have fixed the YCSB load generator to avoid the grave danger that is Coordinated Omission; this was met with great joy, and I have now gone on to fix up Cassandra Stress in a similar vein. The fix is now merged into trunk, so expect to have it with the next Cassandra release (or just build it from source NOW).
        Before we start on the how and what, let us take a step back and consider the why:
• Coordinated Omission is a term coined by my esteemed colleague Gil Tene to describe a common measurement anti-pattern where the measuring system inadvertently coordinates with the system under measurement (watch Gil's How NOT to Measure Latency talk). This results in greatly skewed measurements in commonly used load generators, and has led to long discussions on the Mechanical Sympathy mailing list etc. I've given my own explanation in a previous post, so go and have a read if you need a refresher on the subject.
• Cassandra Stress (CS) is a tool which comes bundled with Cassandra to enable load testing of a given cluster. It allows the user to specify their own queries and schemas and is the preferred tool for Cassandra benchmarking as it gives better access to Cassandra features etc. CS allows 2 testing modes: throughput (default) or 'limit', where a given number of operations per second are thrown at the cluster (the fix discussed here does away with limit and replaces it with throttle/fixed, read on).
        So coordinated omission is bad, and it is now fixed in Cassandra Stress. This post talks a bit on motivation (part I), a bit on implementation (part II) and a bit on what you can do with the new features (part III). Feel free to skip any and all parts, god knows those selfies don't take themselves.

        PART I: Demonstrating The Issue

The CS default mode is to hit the system under test as hard as it can. This is a common strategy in load generators and can give system writers an interesting edge case to work with. Running the benchmark on my laptop (no attempt at finding out real performance numbers here, I just care about the measurement issue) with the provided example workload, I can saturate my Cassandra server (I only gave it a single core to run on) pretty easily. CS tells me the following about my miserable little server:
        INFO  15:02:46 Results:
        INFO  15:02:46 Op rate                   :    5,055 op/s  [insert: 500 op/s, simple1: 4,556 op/s]
        INFO  15:02:46 Partition rate            :   17,294 pk/s  [insert: 12,739 pk/s, simple1: 4,556 pk/s]
        INFO  15:02:46 Row rate                  :  150,266 row/s [insert: 12,739 row/s, simple1: 137,527 row/s]
        INFO  15:02:46 Latency mean              :    4.5 ms [insert: 7.5 ms, simple1: 4.2 ms]
        INFO  15:02:46 Latency median            :    3.0 ms [insert: 5.4 ms, simple1: 2.8 ms]
        INFO  15:02:46 Latency 95th percentile   :   13.1 ms [insert: 20.1 ms, simple1: 11.9 ms]
        INFO  15:02:46 Latency 99th percentile   :   23.8 ms [insert: 33.7 ms, simple1: 21.8 ms]
        INFO  15:02:46 Latency 99.9th percentile :   49.8 ms [insert: 55.0 ms, simple1: 49.0 ms]
        INFO  15:02:46 Latency max               :  603.5 ms [insert: 598.7 ms, simple1: 603.5 ms]
        INFO  15:02:46 Total partitions          :  1,000,000 [insert: 736,585, simple1: 263,415]
        INFO  15:02:46 Total errors              :          0 [insert: 0, simple1: 0]
        INFO  15:02:46 Total GC count            : 112
        INFO  15:02:46 Total GC memory           : 34.850 GiB
        INFO  15:02:46 Total GC time             :    1.0 seconds
        INFO  15:02:46 Avg GC time               :    9.4 ms
        INFO  15:02:46 StdDev GC time            :   16.6 ms
        INFO  15:02:46 Total operation time      : 00:00:57
        With the other 3 cores on my laptop hitting it as hard as they could the median latency on this maxed out server was 3ms. That is pretty awesome. But also, it makes no sense.
How can a maxed out server have a typical response time of 3ms? In reality when servers are maxed out they are unresponsive, and the typical response time becomes worse as the load increases. What CS is reporting however is not response time. It is 'latency'. Latency is one of those terms people use to mean many things, and in this case in particular it does not mean "response time" but rather "service time". Here's a definition of more specific terms to describe system responsiveness (see the wiki on response time):
        • Response time: The time between the submission of a request and the completion of the response.
        • Service time: The time between the initiation and completion of the response to a request.
        • Wait time: The time between the submission of the request and initiation of the response.
        In an all out test one could argue we want all the results as soon as possible, and given a magic load generator they would all launch instantaneously at the benchmark start. This will mean submission time for all requests is at the beginning of the run. Naturally the server will not be able to respond instantly to all requests and can only allow so many requests to be handled in parallel. If the max throughput is 5000 ops/sec, and we are measuring for 100,000 ops it would mean 95K ops have waited a second, so their response time would be at least 1 second (response time > wait time). By the end of the run we would have 5K ops which have patiently waited at least 19 seconds (so 99%ile should be at least 19 seconds).
        It follows that in an all out throughput benchmark response time is terrible by definition, and completely uninformative. It also follows that we should not expect the 'latency' above to be at all indicative of the sort of response time we would get from this server.
The alternative to an all out throughput benchmark is a Responsiveness Under Load (RUL) benchmark. Using Cassandra Stress one can (or rather they could before this fix went in) use the '-rate limit=17000/s' option to benchmark under a load of 17k pks/sec (pks = partition keys; each operation costs X keys, and the throughput limit is specified in pks, not ops). Running this gives me a warm fuzzy feeling, now for sure I shall get a glimpse of the response time at max throughput:
        INFO  08:03:54 Results:
        INFO  08:03:54 Op rate                   :    3,712 op/s  [insert: 369 op/s, simple1: 3,343 op/s]
        INFO  08:03:54 Partition rate            :   12,795 pk/s  [insert: 9,452 pk/s, simple1: 3,343 pk/s]
        INFO  08:03:54 Row rate                  :  110,365 row/s [insert: 9,452 row/s, simple1: 100,913 row/s]
        INFO  08:03:54 Latency mean              :    1.0 ms [insert: 1.6 ms, simple1: 0.9 ms]
        INFO  08:03:54 Latency median            :    0.7 ms [insert: 1.3 ms, simple1: 0.7 ms]
        INFO  08:03:54 Latency 95th percentile   :    2.2 ms [insert: 3.4 ms, simple1: 2.0 ms]
        INFO  08:03:54 Latency 99th percentile   :    4.6 ms [insert: 7.4 ms, simple1: 4.1 ms]
        INFO  08:03:54 Latency 99.9th percentile :   13.4 ms [insert: 23.8 ms, simple1: 12.1 ms]
        INFO  08:03:54 Latency max               :   63.9 ms [insert: 59.9 ms, simple1: 63.9 ms]
        INFO  08:03:54 Total partitions          :    300,000 [insert: 221,621, simple1: 78,379]
        INFO  08:03:54 Total errors              :          0 [insert: 0, simple1: 0]
        INFO  08:03:54 Total GC count            : 33
        INFO  08:03:54 Total GC memory           : 10.270 GiB
        INFO  08:03:54 Total GC time             :    0.2 seconds
        INFO  08:03:54 Avg GC time               :    7.5 ms
        INFO  08:03:54 StdDev GC time            :    2.5 ms
        INFO  08:03:54 Total operation time      : 00:00:23
This seems nice, and if I were not a suspicious man I might accept it. The thing is, I asked for 17k pks per second, but I only got 12,795 pk/s, so obviously the server could not meet the implied schedule. If it could not meet the schedule, response time should be terrible. But it's not, it's very similar to the result we got above. Because? Because again, latency here means service time and not response time. While response time is not informative in an all out test, in an RUL benchmark it is the whole purpose of the benchmark. I have a schedule in mind, requests come at a particular rate, which implies they have a known start time (request n will start at: T0 + n/rate, T0 being the start of the test). This is coordinated omission, let's fix it.

        Part II(a): It Puts The Measurements In The HdrHistogram or It Gets The Hose Again!

First off, I like to have HdrHistogram files to work with when looking at latency data (made a whole post about it too). They are usually tons better than whatever it is people hand roll, and they come with a bunch of tooling that makes data post processing easy. Importantly, HdrHistograms are loss-less, configurably compact, and support loss-less compressed logging. Combine that with the high resolution of data and you have a great basis for post run analysis.
Cassandra Stress had its own sampling latency collection mechanism which would randomly drop some samples as a means to improve memory footprint, so the replacement improved reporting accuracy and reduced the amount of code. A side effect of this change is that Cassandra perf regression testing stability has improved rather dramatically, as indicated by this graph:

        Because of the random sampling 99.9%ile reporting was unstable before the patch went in (May 28th), but smoooooth ever since. Ain't that nice?
Here's the commit with the main changes on the branch that became the PR; there were further changes introduced to enable histogram logging to files. The change is mostly tedious, but importantly it introduces the notion of split response/service/wait histograms. The data is recorded as follows:
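In outline (a hedged sketch, not the actual Cassandra Stress code), each operation records its three latency components into separate HdrHistogram instances, with the intended start time supplied by the schedule:

import org.HdrHistogram.Histogram;

class OpLatencyRecorder {
    // Auto-resizing histograms, 3 significant digits, values recorded in nanoseconds.
    final Histogram serviceTime  = new Histogram(3);
    final Histogram responseTime = new Histogram(3);
    final Histogram waitTime     = new Histogram(3);

    void record(long intendedStartNanos, long actualStartNanos, long endNanos) {
        serviceTime.recordValue(endNanos - actualStartNanos);        // initiation -> completion
        responseTime.recordValue(endNanos - intendedStartNanos);     // submission -> completion
        waitTime.recordValue(actualStartNanos - intendedStartNanos); // submission -> initiation
    }
}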
        The question is, what is the intended start time?

        Part II(b): Where Do I Start?

Load generators, when not going all out, generally have some notion of schedule. It is more often than not quite simply a fixed rate of requests, though the notion of schedule holds regardless of how you come up with it. A schedule in this context means that for any event X there is a time when it should happen: st(X). That time is easily computed in a fixed rate schedule as: "st(n) = T0 + n/rate". Cassandra Stress however was using Google's RateLimiter to provide it with the scheduling, and while battle tested and handy it does not have a notion of schedule. The replacement took place in 2 steps.
First I refactored the existing code to hide the details of how operations are scheduled and where they come from behind a blocking-queue-like interface. The next step was to support a fixed rate stream of operations where the intended schedule is available so I can use it. This is what I ended up with (further tweaked to only start the clock when all the consumer initialization has happened):
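The essence of the fixed rate stream, as a hedged sketch (names and details are mine, not the actual patch): each operation carries its intended start time, computed from the schedule rather than from when the consumer got around to it.

import java.util.concurrent.locks.LockSupport;

class FixedRateOpStream {
    private final long startNanos;
    private final long intervalNanos;
    private long opIndex;

    FixedRateOpStream(long startNanos, double opsPerSecond) {
        this.startNanos = startNanos;
        this.intervalNanos = (long) (1_000_000_000d / opsPerSecond);
    }

    /** Blocks until the next operation is due and returns its intended start time. */
    long nextIntendedStartNanos() {
        long intended = startNanos + opIndex++ * intervalNanos; // st(n) = T0 + n/rate
        long sleep = intended - System.nanoTime();
        if (sleep > 0) {
            LockSupport.parkNanos(sleep);
        }
        return intended; // response time = end - intended, regardless of when we actually started
    }
}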

        Now we're all set to benchmark with no Coordinated Omission!

        Part II(c): Who Reports What?

With the measurement collection now all set up, we face the question of what the different load generating modes should report. Since it turned out that 'limit' was a confusing name (hard limit? upper limit?) it was decided to replace it with 2 distinct modes, bringing the total to 3 modes:
        • Throughput: latency == service time, response/wait time not recorded. Maximum throughput is an important test mode for flushing out CPU bottlenecks and contention. It may help in exposing IO configuration issues as well. It is not however a good predictor of response time distribution as no attempt is made to estimate independent request timing (i.e uncoordinated measurement). The maximum throughput number is valuable to batch processing etc, but I'd caution against using it for the sizing of a responsive system. If a system has responsiveness SLAs it is better to use the 'fixed' test mode and run for significant periods to determine the load under which response times SLA can be met.
        •  Throttle ("-rate throttle=1000/s"): latency == service time, response/wait time recorded like in fixed. This mode is a compromise for people who liked the old 'limit' measurement and want to measure "service time under attempted load". The partitions/second parameter is attempted for, but the summary does not reflect the response time. If you like, this is a way to sneakily record the response time while keeping the summary as it is so people are not so startled by the difference. I don't see myself using it, but have a blast.
• Fixed ("-rate fixed=1000/s"): latency == response time, service/response/wait time recorded. This mode is for benchmarking response time under load. The partitions/second parameter is used to determine the fixed rate schedule of the requests. The hdr log can be used to visualize latencies over time or aggregate latencies for any segment down to the logging frequency. The logs contain response/wait/service components that can be extracted and handled separately.
        In all of the above you can choose to record the HDR histograms to a file on a fixed interval. The interval is the one used for summary reporting. To enable histogram logging use: " -log interval=10s hdrfile=cs.hdr"
Note that the interval should be quite long, and the default of 1s is not actually achievable by Cassandra Stress at the moment. This is down to the terrible way the reporting thread synchronizes with the load generation threads, and while it is on my wish list ("I wish I had time to fix it...") it was outside the scope of fixing CO, so lived to taunt us another day. I've settled on 10 second intervals for now; you can have even longer ones, it all depends on the sort of granularity you want in your reports.

        Part III: What Is It Good For?

        So we got 3 load generating modes, and we've broken up latency into 3 components, the numbers are in histograms, the histograms are in the logs... let's have a little worked example.
        I run a Cassandra 2.1 node on my laptop (old software, bad hardware... I don't care. This is not about absolute numbers, it's about the measurement features). To run this please:
        1. Clone the Cassandra trunk: git clone https://github.com/apache/cassandra.git
        2. Build Cassandra and Cassandra Stress: ant clean build stress-build jar

        I use the example workload provided by the good Cassandra folks, and start with an all out stress test from cassandra/tools/bin:
        $ ./cassandra-stress user profile=../cqlstress-example.yaml duration=60s ops\(insert=1,simple1=9\) -mode native cql3 protocolVersion=2 -rate threads=3  -log interval=10s hdrfile=throughput.hdr
Joy, my laptop is giving me an awesome throughput of 13,803 pk/s. The summary itself is pretty informative for throughput runs, so what do we win with the HDR log?
        The log we produce can be summarized, manipulated and graphed by a bunch of utilities. Here's what the log looks like (this is from the fixed mode run):

Note that while we logged our latency in nanoseconds, the max column is in milliseconds. The nanosecond level measurements are still available in the compressed histogram to the right. Sadly it's not very friendly on the eye. HdrHistogram does include a log processing utility, but it offers quite basic facilities. I've put together a few utilities for histogram log management in HdrLogProcessing. These allow you to split, union and summarize logs and work with the tags feature. Let's make them into handy aliases:

• alias hdrsum='java -jar HdrLogProcessing-1.0-SNAPSHOT-jar-with-dependencies.jar SummarizeHistogramLogs'
• alias hdrspl='java -jar HdrLogProcessing-1.0-SNAPSHOT-jar-with-dependencies.jar SplitHistogramLogs'
• alias hdruni='java -jar HdrLogProcessing-1.0-SNAPSHOT-jar-with-dependencies.jar UnionHistogramLogs'

        The throughput.hdr file we requested uses a recently added HdrHistogram feature which allows for the tagging of different histograms in one log. This makes it easy for applications logging histograms for different measurements to shove them all into a single file rather than many. Since we want to track 2 different operations with their separate response/service/wait times (so up to 6 files) this is quite handy. We can start by using the all in one HdrLogProcessing summary utility to add up the histogram data. The default mode of summary is percentiles and will produce a summary very similar to the one produced above by Cassandra Stress (using the parameters "-if throughput.hdr -ovr 1000000" to summarize in milliseconds):

We can have the microsecond or nanosecond report by tweaking the outputValueUnitRatio (ovr). We are also free to ignore tags and create an all inclusive summary by specifying the ignoreTag (it) parameter. Used together to create a total summary in microseconds we get ("-if throughput.hdr -ovr 1000 -it"):

As a bonus, the summary tool allows us to process multiple files into the same result (logs from multiple benchmark runs for instance) and to select periods within the logs to include.
        The summary tool also supports generating the HGRM format which allows us to produce the following graph (using "-if throughput.hdr -ovr 1000 -st hgrm -of tpt" and the google charts provided in HdrHistogram):



Now imagine we used 3 different machines to stress a single Cassandra node. Because the logs are additive and lossless there's no issue with using the union tool to aggregate them all into a single log and process it as such. Similarly you can use the splitting tool to split out the operation you are interested in and manipulate it in isolation. Indeed the sky is the limit.
Now, with a stunning 13K partitions per second and a 50ms 99.99%ile, I might think it a reasonable idea to say my server can safely sustain a 10K pk/s rate and call it a day. I shall test this adventurous little estimate using the throttle mode:
        $ ./cassandra-stress user profile=../cqlstress-example.yaml duration=60s ops\(insert=1,simple1=9\) -mode native cql3 protocolVersion=2 -rate throttle=10000/s threads=3 -log interval=10s hdrfile=throttle.hdr


So according to this, we should have no trouble at all with 10K pk/s. I mean, sure, it's a bit close to the SLA, but surely not too bad? The throttle mode is in this case just a way to keep your head in the sand a bit longer, but it does record the response time in case you feel like maybe comparing them. Let's compare the service time histogram and the response time histogram for this run. To get both operations' service time histograms I need to play with the log processing a bit:

        1. Split the throttle.hdr file:"hdrspl -if throttle.hdr" -> This results in 6 different files <op>-<rt/st/wt>.throttle.hdr.
        2. Summarize the service time histograms:"hdrsum -if .*.-st.throttle.hdr -ovr 1000 -it -st hgrm -of throttle-st" -> we get throttle-st.hgrm
3. Summarize the response time histograms:"hdrsum -if .*.-rt.throttle.hdr -ovr 1000 -it -st hgrm -of throttle-rt" -> we get throttle-rt.hgrm
        4. Load them into the google charts provided in HdrHistogram.
        Take the red pill and you can keep your estimate, take the blue pill and you might keep your job.
This is a fine demonstration of just how far from schedule operations can get without the service time measurement registering an issue. In other words, the red line (service time) is the measurement suffering from coordinated omission, while the blue line (response time) is the measurement which does not.
        If you are finally convinced that you should be looking at response time instead of service time you can skip the throttle mode and move on straight to the fixed mode. The difference is only in what view of the world you want to see in your summary.
        Finally, you can take the histogram logs and graph the latencies over time using HdrHistogramVisualizer. It currently does not handle tags well, so you'll need to split your logs first, but after that you can generate graphs as funky as this one (this is a longer run with fixed 10K rate, plotting the maximum latency for inserts at the top):
        $ ./cassandra-stress user profile=../cqlstress-example.yaml duration=360s ops\(insert=1,simple1=9\) -mode native cql3 protocolVersion=2 -rate fixed=10000/s threads=3 -log interval=10s hdrfile=fixed.hdr
        $ hdrspl -if fixed.hdr


This tells an interesting story: it seems as if the server was coping alright with the load to begin with, but hit a bump (GC? compaction?) after 225 seconds, and an even larger bump slightly later. That's lovely to know. I'm not much of a UI wiz and I'm sure some readers out there can make some valuable PRs to this project ;-)

        Summary: What's new?

Given that recent versions of Cassandra Stress can now test older versions of Cassandra as well (using the protocolVersion parameter as demonstrated above), you can stop using your crummy old version of Cassandra Stress and build trunk from source to get the following benefits:
        • HdrHistogram latency capturing and logging. Set the following options to get your very own histogram log: "-log interval=10s hdrfile=myawesomelog.hdr".
        • Have a look at the HdrLogProcessing project for some handy utilities to help you slice and dice the data. They are pretty simple and you can feel free to build/contribute your own.
        • 2 new load generation modes: throttle and fixed now replace the old limit. Use fixed to get a view on your cluster's response time under attempted constant load.
        With HdrHistogram support now available in many languages, you can easily build post processing utilities for analysis in a language of your choice.
        Have fun storming the castle!

Many many thanks to Jake Luciani of the Apache Cassandra project for reviewing the PR, helping to shape it, and merging it. Jake is an Open Source hero, buy that man a drink on sight!
        Both Jake and Chris Batey reviewed this post, if any errors remain it is down to their disgustingly sloppy and unprofessional ways, please let me know and I shall have words with their parents.

        4 Years Blog Anniversary


This has been a very slow year blogging wise... too much work, travel, and family. 4 years ago I aimed for 2 posts a month. I nearly hit it in 2013, failed by a little in 2014, and went on to not meet it at all in the last 2 years...
Coincidentally, I also have a delightfully busy 2.5 year old running around; I suspect she has something to do with how little time I have. My eldest has developed a terrible addiction to reading, so I hardly see her these days and she can't be blamed for this one.
        Since I started the blog I've also embarked on developing and maintaining JCTools, speaking at conferences, and starting a local JUG in Cape Town. Sharing of one kind breeds another, but you don't seem to gain bonus hours in the day. Hmmm.
        If I sound like I'm making excuses it's because I'm still feeling bad about missing that 2 posts a month goal.
Despite my decreased output, blog traffic has steadily increased, teaching me a thing or 2 about search and web economics. I hope to write more in the next year as I plan to give conferences a rest for the next 6-8 months; we'll see how it goes.
        In any case, thanks for reading :-)

        Linked Array Queues, part 1: SPSC

        When considering concurrent queues people often go for either:
        1. An array backed queue (circular array/ring buffer etc.)
        2. A linked list queue
        The trade off in the Java world seems to be that array backed queues offer better throughput, but are always bounded and allocated upfront, and linked queues offer smaller footprint when empty, but worse throughput and higher overhead per element when full. Linked queues are also often unbounded.
In JCTools we have the above duality, but later versions introduced a hybrid queue which attempts to offer a middle ground between the 2 extremes above. This hybrid is the linked array queue:
        1. Queue is backed by one or more arrays of references.
        2. As long as the queue can fit in a single array no further arrays are allocated.
        3. When empty the queue can naturally shrink to a single backing array.
For the SPSC case this has already been done in C++ by FastFlow with their uSPSC queues. In Java there are no other implementations that I know of (please let me know if I missed anything).
In implementing these queues I have relied heavily on the feedback and support of @akarnokd, @pcholakov and others who contributed fixes, filed bugs, and so forth. Thanks guys!
        3 variations on linked array queues have been explored in JCTools:
        1. Chunked: Each backing array is a fixed chunk size. The queue is bounded.
        2. Unbounded: Each backing array is a fixed chunk size. The queue is unbounded.
        3. Growable: Chunk size doubles every time until the full blown backing array is used. The queue is bounded.

        Hot Path offer/poll

        The queues all share the same polling logic and on the fast path share the same offer logic as well:
        If you've not read JCTools code before, or maybe you've forgotten, here's the convention:
        • sv/lvXXX: Store/Load Volatile, same as a volatile field write/read
        • sp/lpXXX: Store/Load Plain, same as a normal field write/read
        • soXXX: Store Ordered, same as an AtomicXXX.lazySet
        This code will need reconsidering in a post-JDK9 world, some other time perhaps.
        So what's the deal:
• As long as we are not past the producerLimit, keep writing.
  • If we have passed it, go to the slow path (where the money be)
        • As long as there's stuff in the queue, read it.
          • Unless it's JUMP, in which case read through to next array.
The general idea here is that the common case is small and simple. This should have the following effects:
        1. offer/poll code is small enough to inline when hot.
2. offerColdPath/newBufferPoll are set up to either not inline or, when inlined, be out-of-band code blocks. This should keep the size of the hot path small and help the compiler focus on more important things.
        3. offer/poll should perform similar to the SpscArrayQueue in the common case.
        NOTE: The producer publishes element then index, but the consumer does the reverse. This ensures a consistent view on the consumer side where the following assertion must hold:
          !queue.isEmpty() => queue.poll() != null

NOTE: In some early versions the new array itself was written into the slot instead of the JUMP constant marker. This required an instanceof check for each element loaded, and a compromise to either not allow Object[] to be an element of the queue or to introduce some wrapper class. Comparing to a constant turned out to be much simpler and faster.

          Cold Path poll

          The consumer has hit a JUMP, which indicates the producer has linked a new array to this one. The new array is the last element of the current array. We go to newBufferPoll:

          The consumer therefore has to:
          1. Load new array from the last element of old array.
          2. Null out reference from old to new (prevent nepotism).
          3. Adjust consumer view on buffer and mask.
          4. Poll (remove element and progress counter) from new buffer.
Note that peek similarly refreshes its view of the buffer on hitting the JUMP marker. This goes against the standard spirit of peek, which is a view-only operation.

          Cold Path offer: Unbounded

          This method is implemented differently for each of the use cases, unbounded is the simplest:
          In the unbounded case we care only about our ability to make progress inside the current buffer:

1. Probe inside the buffer to see if we have 'many' elements to go before full. If the buffer is mostly empty (this is the case for most applications most of the time), then we have successfully saved ourselves loading elements from the queue before writing into it. A successful probe will update the producer limit and write to the queue (see the sketch after this list).
2. Probe failed; we check if we can write a single element. Remember we always need one free slot to JUMP with, so we look at the slot following the one we are on. If the next slot is empty we write to the current one, but we don't update the limit.
          3. A single slot remains in the buffer. We allocate a new buffer and link to it.
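To make the hot path, the JUMP handling and the probing above concrete, here is a rough sketch of an unbounded SPSC linked array queue. It is emphatically not the JCTools code: Atomic* classes stand in for Unsafe, there is no padding and no size()/isEmpty(), and all class, field and method names are mine:

import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.AtomicReferenceArray;

// Rough sketch only. Single producer thread, single consumer thread, chunkSize a power of 2 (>= 4).
public final class SpscLinkedArraySketch<E> {
    private static final Object JUMP = new Object(); // marker: "element lives in the next buffer"
    private final int mask;                          // chunkSize - 1
    private final int nextSlot;                      // spare last slot, holds the link to the next buffer
    private final int lookAheadStep;                 // how far ahead the producer probes
    private final AtomicLong producerIndex = new AtomicLong();
    private final AtomicLong consumerIndex = new AtomicLong();
    private AtomicReferenceArray<Object> producerBuffer; // only used by the producer
    private AtomicReferenceArray<Object> consumerBuffer; // only used by the consumer
    private long producerLimit;                      // below this index no probing is needed

    public SpscLinkedArraySketch(int chunkSize) {
        mask = chunkSize - 1;
        nextSlot = chunkSize;
        lookAheadStep = Math.max(1, chunkSize / 4);
        producerBuffer = consumerBuffer = new AtomicReferenceArray<>(chunkSize + 1);
    }

    public boolean offer(E e) {
        final AtomicReferenceArray<Object> buffer = producerBuffer;
        final long pIndex = producerIndex.get();      // we are the only writer of this index
        final int offset = (int) (pIndex & mask);
        if (pIndex < producerLimit || haveRoom(buffer, pIndex)) {
            buffer.lazySet(offset, e);                // element first (ordered store)...
        } else {
            linkNewBufferAndWrite(buffer, offset, e); // only one usable slot left: link a new buffer
        }
        producerIndex.lazySet(pIndex + 1);            // ...then publish the producer index
        return true;                                  // unbounded: offer always succeeds
    }

    // Cold path probe: is the slot at pIndex writable without linking a new buffer?
    private boolean haveRoom(AtomicReferenceArray<Object> buffer, long pIndex) {
        if (buffer.get((int) ((pIndex + lookAheadStep) & mask)) == null) {
            producerLimit = pIndex + lookAheadStep;   // plenty of room, no probing for a while
            return true;
        }
        // keep one slot free for JUMP: write here only if the *next* slot is also free
        return buffer.get((int) ((pIndex + 1) & mask)) == null;
    }

    private void linkNewBufferAndWrite(AtomicReferenceArray<Object> buffer, int offset, E e) {
        final AtomicReferenceArray<Object> next = new AtomicReferenceArray<>(mask + 2);
        producerBuffer = next;
        next.lazySet(offset, e);                      // element goes into the new buffer...
        buffer.lazySet(nextSlot, next);               // ...link it from the old buffer's spare slot...
        buffer.lazySet(offset, JUMP);                 // ...then tell the consumer to jump
    }

    @SuppressWarnings("unchecked")
    public E poll() {
        final AtomicReferenceArray<Object> buffer = consumerBuffer;
        final long cIndex = consumerIndex.get();
        final int offset = (int) (cIndex & mask);
        final Object e = buffer.get(offset);          // volatile load
        if (e == null) {
            return null;                              // empty
        }
        if (e == JUMP) {
            return newBufferPoll(buffer, cIndex, offset);
        }
        buffer.lazySet(offset, null);                 // free the slot...
        consumerIndex.lazySet(cIndex + 1);            // ...then publish the consumer index
        return (E) e;
    }

    @SuppressWarnings("unchecked")
    private E newBufferPoll(AtomicReferenceArray<Object> buffer, long cIndex, int offset) {
        final AtomicReferenceArray<Object> next =
            (AtomicReferenceArray<Object>) buffer.get(nextSlot);
        buffer.lazySet(nextSlot, null);               // null the old->new link to prevent nepotism
        consumerBuffer = next;
        final Object e = next.get(offset);            // same offset, now in the new buffer
        next.lazySet(offset, null);
        consumerIndex.lazySet(cIndex + 1);
        return (E) e;
    }
}

The first branch in offer and the top of poll are the hot paths; haveRoom and linkNewBufferAndWrite are the producer's cold path, and newBufferPoll is the consumer side of the JUMP described earlier.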
          This is not a large departure from the SpscArrayQueue, I leave the comparison as an exercise to the delightful reader.

          Cold Path offer: Chunked

          With chunked linked array queues we have a fixed chunk size, but an overall bound on the size. This complicates matters because on top of the buffer level limits put on the producer we must also consider the queue level limitation. In particular there might be space available in the current producer buffer, while the queue is in fact full. Here's the implementation:
Similar to the above, but the difference lies in negotiating the buffer limit vs. the queue limit, sketched below.
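As a hedged illustration of that negotiation (again not the JCTools code; the caching scheme and all names are mine), the queue-level check boils down to comparing the producer index against a possibly stale view of the consumer index, and only refreshing that view, with a volatile load, when it suggests the queue might be full:

import java.util.concurrent.atomic.AtomicLong;

// Queue-level bound check for a chunked queue: there may be room in the current buffer while
// the queue as a whole is full. The producer keeps a cached view of the consumer index and
// only samples the real consumer index when the cached view says we might be full.
final class ChunkedBoundCheck {
    private final AtomicLong consumerIndex;   // published by the consumer
    private final long maxQueueCapacity;      // overall queue bound, not the chunk size
    private long cachedConsumerIndex;         // producer-local, possibly stale

    ChunkedBoundCheck(AtomicLong consumerIndex, long maxQueueCapacity) {
        this.consumerIndex = consumerIndex;
        this.maxQueueCapacity = maxQueueCapacity;
    }

    boolean isQueueFull(long pIndex) {
        if (pIndex - cachedConsumerIndex < maxQueueCapacity) {
            return false;                             // room to spare according to the cached view
        }
        cachedConsumerIndex = consumerIndex.get();    // sample the consumer index
        return pIndex - cachedConsumerIndex >= maxQueueCapacity;
    }
}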

          Cold Path offer: Growable

The growable queue is similar in spirit to an ArrayList as it doubles its backing array capacity when a buffer is full. This adds an interesting bit of information to the game, since:
1. If we are not on the last buffer, we've not hit the queue limit.
          2. If we're on the last buffer, and so is the consumer, we can revert to just checking for null in the buffer.
          3. If we're on the last buffer, and the consumer isn't, we need to hang tight and let it pass. It's a temporary state.
          The code for handling this is rather ugly and long, but since you've put up with the rest:
The lookAheadStep is dynamically adjusted as the buffer grows, and also acts as an indicator for the transition period during which the producer is on the last buffer and the consumer is trailing. It's a mess to look at, but sadly performs better than a neater alternative which builds on the Chunked variant... General idea:

1. lookAheadStep is positive => we are either not on the last buffer, or both the producer and consumer are on it => it is enough to consider the elements in the producer buffer to determine if the queue is full. In particular, if the buffer is full then we must resize, unless we are on the last buffer, in which case we are full. Note that we allow using the last buffer down to the last element, since we don't need a JUMP anymore.
          2. lookAheadStep is negative => we are waiting for consumer to get to the last buffer. We use the lookAheadStep to indicate how far behind the consumer is.
          It's not complex, just messy, and if you got an elegant representation please ping me with your suggestions.

          Performance?

          GODDAMN it! this is the time consuming part! I've benchmarked on a few setups, but not kept track or clear records. I'll need to do it all again, might as well be another post since nobody reads this far into these things anyhow. La la la la la, performance is great, la la la la la, if I look now I might be disappointed, la la la la la, this is better than benchmarking, la la la la la.

          TO BE CONTINUED...

          Linked Array Queues, part 2: SPSC Benchmarks

          JCTools has a bunch of benchmarks we use to stress test the queues and evaluate optimizations.
          These are of course not 'real' workloads, but serve to highlight imperfections and opportunities. While it is true that an optimization might work in a benchmark but not in the real world, a benchmark can work as a demonstration that there are at least circumstances in which it does work. All measurement is imperfect, but not as imperfect as claims made with no fucking evidence whatsoever, so here goes.
How do these linked-array queues fare in the benchmarks? What can we learn here?
          The linked array queues are a hybrid of the array and linked queues. So it seems reasonable that we should compare them to both SpscArrayQueue and SpscLinkedQueue. We should also consider how the queues differ and see if we can flush out the differences via the benchmarks.
          If you crack under the pressure of boring details, skip to the summary, do not stop at interlude, do not collect a cool drink or get praise, just be on yer fuckin' merry way.

          Setup:

          Benchmarks are run on a quiet server class machine:
          • Xeon processor(Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz): 2 CPUs x 12 cores x 2 threads (HT)
          • CentOS
          • Oracle JDK8u101
          • All benchmarks are run taskset to cores on the same numa node, but such that threads cannot share the same physical core.
          • Turbo boost is off, the scaling governor is userspace and the frequency is fixed.
          • The code is on github

          Throughput benchmark: background and method

          A throughput benchmark for queues is a tricky fucker. In particular the results change meaning depending on the balance between consumer and producer:
• If the consumer is faster than the producer we are measuring empty queue contention (producer/consumer hitting the same cache line for elements in the queue, perhaps sampling each other's index). Empty queues are the expected state for responsive applications.
          • If the producer is faster than the consumer we are measuring full queue contention, which may have similar issues. For some queues which optimize for the healthy assumption that queues are mostly empty this may be a particularly bad place to be.
          • If the producer and consumer are well balanced we are testing a streaming use case which offers the most opportunities for progress for both consumer and producer. This should yield the best performance, but for most applications may not be a realistic scenario at all.
          The JCTools throughput benchmark does not resolve these issues. It does however report results which give us an idea of poll/offer failure rates which are in turn indicative of which state we find ourselves in.
          A further challenge in managed runtime environments, which is unrelated to queues, is that garbage generating benchmarks will have GC state accumulate across measurement iterations. The implication is that each iteration is measuring from a different starting state. Naturally occurring GCs will leave the heap in varying states depending on the point at which they hit. We can choose to either embrace the noise in the measurement as an averaging of the cost/overhead of garbage or allocate a large enough heap to accommodate a single iteration worth of allocation and force a full GC per iteration, thus resetting the state per iteration. The benchmarks below were run with 8g heap and a GC cycle between iterations.
          The benchmark I run here is the no backoff version of the throughput benchmark where failure to offer/poll makes no attempt at waiting/yielding/tapping of foot and just tries again straight away. This serves to maximize contention and is not a recipe for happiness in real applications.
          JMH parameters common to all runs below:
          • -gc true -> GC cycle between iterations
          • -jvmArgs="-Xmx8g -Xms8g" -> 8g heap
          • -i 10  -r 1 -> 10 measurement iterations, 1 second each
          • -wi 20 -w 1 -> 20 warmup iterations, 1 second each
          • -f 5 -> five forks each to expose run to run variance

          Throughput benchmark: baseline(JMH params: -bm thrpt -tu us)

          Here's some baseline results, note the unit is ops/us equal to millions of ops per second:
          SpscArrayQueue (128k capacity)
          offersFailed   0.005 ±  0.008  ops/us
          offersMade   252.201 ±  1.649  ops/us
          pollsFailed    0.009 ±  0.008  ops/us
          pollsMade    252.129 ±  1.646  ops/us

          So the SpscArrayQueue is offering great throughput, and seems pretty well balanced with failed offers/polls sort of cancelling out and low compared to the overall throughput.

          SpscLinkedQueue
          offersFailed     ≈ 0           ops/us
          offersMade    14.711 ±  5.897  ops/us
          pollsFailed   12.624 ±  8.281  ops/us
          pollsMade     14.710 ±  5.896  ops/us

          For the SpscLinkedQueue we have no failed offers, since it's an unbounded queue. We do see a fair amount of failed polls. We expect the polls to be faster than the offers as offering pays for allocation of nodes on each element (24b overhead per element), while the poll simply leaves it to the GC to toss it all away.
          With this baseline we would expect linked arrays queues performance to be somewhere between the 2 data points above. Unlikely to hit the highs of the preallocated array queue, but hopefully much better than a linked queue.

          Throughput benchmark: growable

So assuming we let it grow to 128k, how does the SpscGrowableArrayQueue perform in this benchmark, and how much does the initial size impact the performance? CNK here is the initial buffer size. The buffer will double in size when offer fills up the current buffer, until we hit the max size buffer.
           CNK                 Score    Error   Units
            16 offersFailed    0.006 ±  0.006  ops/us
            16 offersMade    183.720 ±  0.450  ops/us
            16 pollsFailed     0.003 ±  0.001  ops/us
            16 pollsMade     183.592 ±  0.450  ops/us
           128 offersFailed    0.003 ±  0.006  ops/us
           128 offersMade    184.236 ±  0.336  ops/us
           128 pollsFailed     0.003 ±  0.001  ops/us
           128 pollsMade     184.107 ±  0.336  ops/us
            1K offersFailed    0.001 ±  0.003  ops/us
            1K offersMade    183.113 ±  1.385  ops/us
            1K pollsFailed     0.003 ±  0.001  ops/us
            1K pollsMade     182.985 ±  1.385  ops/us
           16K offersFailed    0.007 ±  0.006  ops/us
           16K offersMade    181.388 ±  5.380  ops/us
           16K pollsFailed     0.004 ±  0.001  ops/us
           16K pollsMade     181.259 ±  5.380  ops/us

• Under constant streaming pressure the Growable queue will keep growing until either the full sized buffer is allocated (very likely) or a smaller buffer is found in which the throughput is sustainable (unlikely for this benchmark, as all it takes is a single spike); if that were the case we would have no failing offers. Either way we expect the transition to the last buffer to be a short phase, after which the algorithm is very similar to SpscArrayQueue and no further allocations happen. The number of resizing events is small, as the buffer doubles each time (so log2(capacity/initial size), e.g. for initial capacity 16k: 16k -> 32k -> 64k -> 128k).
• You may consider the slowdown from SpscArrayQueue large at roughly 25%, but I don't think it too bad considering that with the throughputs in question we are looking at costs in the single digit nanoseconds, where every extra instruction is going to show up (back of envelope: 250 ops/us -> ~4ns per offer/poll vs 180 ops/us -> ~5ns; 1ns = ~3 cycles ~= 12 instructions or 1 L1 load).

          Throughput benchmark: chunked

          For Chunked we see the expected increase in throughput as we increase the chunk size (CNK is the fixed chunk size, the max size is 128K):
           CNK                 Score    Error   Units
            16 offersFailed      ≈ 0           ops/us
            16 offersMade     43.665 ±  0.892  ops/us
            16 pollsFailed     9.160 ±  0.519  ops/us
            16 pollsMade      43.665 ±  0.892  ops/us
           128 offersFailed   ≈ 10⁻⁴           ops/us
           128 offersMade    151.473 ± 18.786  ops/us
           128 pollsFailed     0.380 ±  0.331  ops/us
           128 pollsMade     151.443 ± 18.778  ops/us
            1K offersFailed    0.309 ±  0.375  ops/us
            1K offersMade    149.351 ± 14.102  ops/us
            1K pollsFailed     0.112 ±  0.125  ops/us
            1K pollsMade     149.314 ± 14.120  ops/us
           16K offersFailed   ≈ 10⁻⁸           ops/us
           16K offersMade    175.408 ±  1.563  ops/us
           16K pollsFailed     0.038 ±  0.031  ops/us
           16K pollsMade     175.394 ±  1.563  ops/us

• Note the decline in throughput for smaller chunks is matched with an increase in poll failures, indicating that the consumer is becoming faster than the producer as the chunk grows smaller, requiring more frequent allocations by the producer.
          • Note also that even with 16 slot chunks this option is ~3 times faster than the linked alternative.
• Under constant streaming pressure the Chunked queue will be pushed to its maximum size, which means the producer will be constantly allocating buffers. The producer resize conditions are also slightly trickier and require sampling of the consumer index. The consumer will be slowed down by this sampling, and also slowed down by jumping to new buffers. This problem will be worse as more resizing happens, which is a factor of chunk size.
          • The benefit of larger chunks will cap out at some point, you could explore this parameter to find the optimum.
          • An exercise to readers: run the benchmark with the JMH GC profiler and compare the queues. Use it to verify the assumption that Growable produces a bounded amount of garbage, while Chunked continues to churn.
          • Max throughput is slightly behind Growable.
The main takeaways for sizing here seem to me that tiny chunks are bad, but even with small/medium chunks you can have pretty decent throughput. The right size for your chunk should therefore depend on your expectations of average traffic on the one hand and the desirable size when empty on the other.

          Throughput benchmark: unbounded

          For unbounded we see the expected increase in throughput as we increase the chunk size  (CNK is the chunk size, the max size is infinity and beyond):
           CNK                 Score    Error   Units
            16 offersFailed      ≈ 0           ops/us
            16 offersMade     56.315 ±  7.563  ops/us
            16 pollsFailed    10.823 ±  1.611  ops/us
            16 pollsMade      56.315 ±  7.563  ops/us
           128 offersFailed      ≈ 0           ops/us
           128 offersMade    135.119 ± 23.306  ops/us
           128 pollsFailed     1.236 ±  0.851  ops/us
           128 pollsMade     131.770 ± 21.535  ops/us
            1K offersFailed      ≈ 0           ops/us
            1K offersMade    182.922 ±  3.397  ops/us
            1K pollsFailed     0.005 ±  0.003  ops/us
            1K pollsMade     176.208 ±  3.221  ops/us
           16K offersFailed      ≈ 0           ops/us
           16K offersMade    177.586 ±  2.929  ops/us
           16K pollsFailed     0.031 ±  0.038  ops/us
           16K pollsMade     176.884 ±  2.255  ops/us

          • The 16 chunk size is ~4 times faster than the linked list option, as chunk size increases it gets more efficient.
          • Max throughput is slightly behind growable.
          • Why is Chunked faster than Unbounded on 128 chunks, but slower on 1K? I've not looked into it, it's taken long enough to write this bloody post as it is. How about you check it out and let me know?

          Throughput benchmark: summary

          • Growable queue performs well regardless of initial size for this case.
• For chunked and unbounded the chunk size has definite implications on throughput. Having said that, throughput is very good even for relatively small chunks.
          • Note that the results for the same benchmark without a GC cycle between iterations were very noisy. The above result intentionally removes the variance GC induces by forcing GC and allowing a large heap. The GC impact of linked array queues when churning will likely be in increasing old generation pressure as the overflow chunks are likely to have been promoted before they get collected. This is assuming a load where overflow is not that frequent and other allocation is present.

          Interlude

          Go ahead, grab a beer, or a coffee, a mojito perhaps(Norman/Viktor, go on), or maybe order a large Pan Galactic Gargle Blaster, you've earned it. I never thought you'd read this far, it's a tad dry innit? Well, it's not fun writing it either, but we're getting there, just need to look at one more benchmark...

          Burst "cost"/latency benchmark: background and method

          The burst cost benchmark is a more stable workload than the throughput one. The producer sends a burst of messages to a consumer. The consumer signals completion when the last message in the burst has arrived. The measurement is from first message sent and arrival of last message observed from the producer thread. It's a 'latency' benchmark, or rather an estimate of average communication cost via the particular thread. It's got bells on. It's a friend, and it's a companion, it's the only product you will ever need, follow these easy assembly instructions it never needs ironing.
          This is, I think, a better evaluation of queue characteristics than the throughput benchmark for most applications. Queue starts empty, is hit with a burst of traffic and the burst is drained. The cost measured is inclusive of return signal latency, but as scenarios go this is not too far fetched. Calling this queue latency is a damn sight better than PRETENDING THE BLOODY INVERSE OF THROUGHPUT IS LATENCY. <deep breath>
          Same machine and JMH parameters used as above. All the measurements below are average time per operation in nanoseconds. The benchmark code can be found here.

          Burst Cost benchmark: baseline

          Testing first with SpscArrayQueue and SpscLinkedQueue to establish the expected baseline behaviour, BRST is the size of the burst:
          SpscArrayQueue (128k capacity)
          BRST      Score     Error  Units
            1     284.709 ±   8.813  ns/op
           10     368.028 ±   6.949  ns/op
          100     914.150 ±  11.424  ns/op

          Right, sending one message has the overhead of cache coherency making data visible to another core. Sending 10/100 messages we can see the benefits of the SpscArrayQueue in allowing consumer and producer to minimize cache coherency overhead per element. We see a satisfying drop in cost per element as the burst size grows (the per element cost is the cost of the burst divided by the number of elements sent, so we see here: 1 -> 284, 10 -> 36, 100 -> 9), but this DOES NOT MEAN THE FRIGGIN' LATENCY IS BLOOMIN' DOWN TO 9ns WHEN WE SEND 100 MESSAGES.

          SpscLinkedQueue
          BRST      Score     Error  Units
            1     378.043 ±   7.536  ns/op
           10    1675.589 ±  44.496  ns/op
          100   17036.528 ± 492.875  ns/op

For the linked queue the per element overheads are larger, as well as the cost of scanning through a linked list rather than an array as we poll data out. The gap between it and SpscArrayQueue widens as the burst size grows. The linked queue fails to make the most of the batching opportunity offered by slack in the queue, in other words.

          Burst Cost benchmark: growable

          We expect the growable queue to grow to accommodate the size of the burst. The eventual buffer size will be a tighter fit around the burst size, which in theory might be a benefit as the array is more likely to fit in cache. Let's spin the wheel (CNK is the initial chunk size, the max size is 128K):
          BRST  CNK    Score    Error  Units
            1    16  327.703 ± 11.485  ns/op
            1   128  292.382 ±  9.807  ns/op
            1    1K  275.573 ±  6.230  ns/op
            1   16K  286.354 ±  6.980  ns/op
           10    16  599.540 ± 73.376  ns/op
           10   128  386.828 ± 10.016  ns/op
           10    1K  376.295 ±  8.009  ns/op
           10   16K  358.096 ±  6.107  ns/op
          100    16 1173.644 ± 28.669  ns/op
          100   128 1152.241 ± 40.067  ns/op
          100    1K  966.612 ±  9.504  ns/op
          100   16K  951.495 ± 12.425  ns/op

          We have to understand the implementation to understand the results here, in particular:
• The growable queue buffer will grow to accommodate the burst in a power of 2 sized array. This in particular means that when the burst size is 100, the buffer for the queue that starts with 16 element chunks also grows to 128, and the delta between the 2 configurations becomes marginal once that happens.
• The queue tries to probe ahead within a buffer to avoid reading on each element. The read ahead step is 25% of the buffer size. The smaller the buffer the more often we need to probe ahead (e.g. for a 16 element buffer we do this every 4 elements). This overhead is visible in the smaller buffers.
          • A burst which manages to fill more than 75% will fail to read ahead with the long probe described above and fall back to reading a single element ahead. This implies that buffers that fit too snugly to the burst size will have worse performance.
          • When the buffers are sufficiently large the costs closely match the costs observed for the SpscArrayQueue. Yay!

          Burst Cost benchmark: chunked

          For Chunked we see a slight increase in base cost and a bummer when the burst size exceeds the chunk size (CNK is the chunk size, the max size is 128K):
          BRST  CNK    Score    Error  Units
            1    16  311.743 ± 11.613  ns/op
            1   128  295.987 ±  5.468  ns/op
            1    1K  281.308 ±  8.381  ns/op
            1   16K  281.962 ±  7.376  ns/op
           10    16  478.687 ± 52.547  ns/op
           10   128  390.041 ± 16.029  ns/op
           10    1K  371.067 ±  7.789  ns/op
           10   16K  386.683 ±  5.276  ns/op
          100    16 2513.226 ± 38.285  ns/op
          100   128 1117.990 ± 14.252  ns/op
          100    1K  969.435 ± 10.072  ns/op
          100   16K  939.010 ±  8.173  ns/op

          Results are overall similar to the growable, what stands out is:
          • If the chunk is too little to accommodate the burst we see a large increase to cost. Still, comparing this to the SpscLinkedQueue shows a significant benefit. Comparing to the growable version we see the sense in perhaps letting the queue grow to a better size as a response to bursts.
          • If the chunk is large enough to accommodate the burst behaviour closely matches SpscGrowableArrayQueue. Yay!

          Burst Cost benchmark: unbounded

          Final one, just hang in there. 
          BRST  CNK    Score    Error  Units
            1    16  303.030 ± 11.812  ns/op
            1   128  308.158 ± 11.064  ns/op
            1    1K  286.379 ±  6.027  ns/op
            1   16K  282.574 ± 10.886  ns/op
           10    16  554.285 ± 54.468  ns/op
           10   128  407.350 ± 11.227  ns/op
           10    1K  379.716 ±  9.357  ns/op
           10   16K  370.885 ± 12.068  ns/op
          100    16 2748.900 ± 64.321  ns/op
          100   128 1150.393 ± 26.355  ns/op
          100    1K 1005.036 ± 14.491  ns/op
          100   16K  979.372 ± 13.369  ns/op

          What stands out is:
          • If the chunk is too little to accommodate the burst we see a large increase to cost. Still, comparing this to the SpscLinkedQueue shows a significant benefit.
          • If the chunk is large enough to accommodate the burst and make the most of probing ahead the costs closely resemble the SpscArrayQueue for larger bursts. Yay!

          Burst Cost benchmark: summary

          We see a pretty much expected result for these queues, which is to say that on the fast path they are the same and therefore if the fast path dominates they show the same costs as a plain SpscArrayQueue, which is good news. When chunks are too small and we have to allocate new chunks we start to see overheads.
          A more subtle observation here is that smaller buffers have some drawbacks as the slow path of the producer code is more likely to be executed. This reflects correctly the empty queue assumption that the JCTools queues rely on, but broken assumptions are... well... broken, so the cost goes up.
A further consideration here for smaller buffers is the hot/cold structure of the code. It is intended that the producer code inlines the "offer" hot path, but as the cold path is rarely run it will fail to inline it. This is an intentional inlining fail. Inlining the cold path will make the "offer" larger and a lot more complex, making the compiler's job harder and possibly resulting in worse code. When we run with burst/buffer sizes which systematically violate the hot/cold assumption we can trigger a bad inlining decision. This can be worked around by marking the cold methods as "dontinline" using the CompileCommand option or the compiler oracle file.
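For example, something along these lines should do it (the exact method pattern syntax varies between JVM versions, so treat it as illustrative rather than copy-paste ready):
-XX:CompileCommand=dontinline,*.offerColdPath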

          Mmmm... this is boring :(

Yes... Nothing too surprising happened here, I did not emerge from the lab with my coat on fire, these things happen. One anecdote worth sharing here is that I originally ran the benchmarks with only 2 threads allocated to the JVM; this resulted in noisier measurement as I effectively under-provisioned the JVM with CPUs for compilation/GC or any OS scheduling contention/interrupts. When running on a 2 core laptop this is a reasonable compromise to fix the cross core topology of the benchmark, but on a server class machine it is easy enough to provision the same topology with more CPUs.
          Next part will feature the astounding extension of these queues to the MPSC domain and will be far more interesting! I promise.

            What do Atomic*::lazySet/Atomic*FieldUpdater::lazySet/Unsafe::putOrdered* actually mean?

            Paved with well intended definitions it is.
lazySet/putOrdered (or an ordered store) was added as a bit of a rushed/non-committal afterthought after the JMM was done, so its description is subject to many debates on the mailing lists, stack overflow and watercoolers the world over. This post merely tries to provide a clear definition with references to relevant/reputable sources.

            Definition:
            An ordered store is a weakened volatile store[2][5]. It prevents preceding stores and loads from being reordered with the store[1][3][6], but does not prevent subsequent stores and loads from being reordered with it[2][4].
            If there was a JMM cookbook entry for ordered store defining it with barriers in mind it would seem that the consensus is that ordered stores are preceded by a StoreStore AND a LoadStore barrier[4][6].
An ordered store is practically the same as a C++ memory_order_release store[5][7].
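To make the definition concrete, here it is as runnable Java: essentially reference [5]'s safe publication example written out with an AtomicLong standing in for the volatile field (a sketch, the class and method names are mine):

import java.util.concurrent.atomic.AtomicLong;

public final class OrderedStoreExample {
    private long x;                                  // plain field
    private final AtomicLong y = new AtomicLong(0);  // stands in for a volatile field

    // writer thread
    void writer() {
        x = 1;             // plain store
        y.lazySet(2);      // ordered store: the x = 1 store cannot be reordered after it
    }

    // reader thread
    long reader() {
        long r1 = y.get(); // volatile (acquire) load
        long r2 = x;       // plain load
        // If r1 == 2 then r2 must be 1; (r1, r2) == (2, 0) is forbidden, just as with a
        // volatile store. What the ordered store gives up is ordering with subsequent
        // operations in the writer and its place in the total synchronization order,
        // which is why Dekker-style idioms break (see [5] and [6]).
        return r1 * 10 + r2;
    }
}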

            References:
            [1] Original bug:
            "lazySet provides a preceding store-store barrier (which is either a no-op or very cheap on current platforms), but no store-load barrier"
            See here: http://bugs.java.com/bugdatabase/view_bug.do?bug_id=6275329

            [2] java.util.concurrent docs:
            "lazySet has the memory effects of writing (assigning) a volatile variable except that it permits reorderings with subsequent (but not previous) memory actions that do not themselves impose reordering constraints with ordinary non-volatile writes."
            See here:
            https://docs.oracle.com/javase/8/docs/api/?java/util/concurrent/package-summary.html

[3] JMM cookbook: defining the meaning of the barriers here: http://g.oswego.edu/dl/jmm/cookbook.html

            [4] concurrency-interest Q&A with Doug Lea, October 2011:
            "[Ruslan]:... If it happens (== we see spin-wait loop finished) -- does it mean,that all writes preceding lazySet are also done, committed, and visible to thread 2, which finished spin-wait loop?
            [Doug]: Yes, although technically, you cannot show this by reference to the Synchronization Order in the current JLS.
            ...
            lazySet basically has the properties of a TSO store
            "

            See here:
            http://cs.oswego.edu/pipermail/concurrency-interest/2011-October/008296.html

            The discussion is REALLY worth reading, involving Hans, Doug, Vitaly, Ruslan and other such respectable members of this excellent mailing list. Go on, I'll wait.

            The discussion on that thread concludes the following is required:

            ...
            LoadStore + StoreStore
            st [Y],X // store X into memory address Y
            ...

Outcome: Stores before and after are now prevented from floating across the barrier. Loads before the barrier are also prevented from floating down. Later loads are free to float up. Note that the st may in theory be delayed indefinitely; certainly other loads and stores are allowed to float up between it and the barrier.

            [5] concurrency-interest Q&A with Aleksey Shipilev, May 2016:
            "putOrdered is a release in disguise, most of the C++11 std::atomic(...,
            mem_order_release) reasoning applies here."
            "acquire/release are the relaxations from the usual volatile
            rules -- while producing happens-before-s, they drop from total
            synchronization order, thus breaking sequential consistency."

            And adds some fine examples:
            "Safe publication still works:

            int x; volatile int y;
            -----------------------------------------------------------------------
            put(x, 1); | r1 = get{Acquire|Volatile}(y);
            put{Release|Volatile}(y, 2); | r2 = get(x);

            (r1, r2) = (2, 0) is forbidden.

            But anything trickier that requires sequential consistency fails. IRIW
            fails, because no consistent write order observed by all threads. Dekker
            fails, because release stores followed by loads may or may not be
            visible in program order:

            volatile int x; volatile int y;
            -----------------------------------------------------------------------
            putRelease(x, 1); | putRelease(y, 1);
            r1 = getAcquire(y); | r2 = getAcquire(x);

            (r1, r2) = (0, 0) is allowed. Forbidden if all ops are volatile.


            Safe construction still does not work (even for volatiles!):

            A global;
            -----------------------------------------------------------------------
            A a = <alloc>; | A a = global;
            put{Release|Volatile}(a.x, 1); | r1 = get{Acquire|Volatile}(a.x);
            global = a; |

            (r1) = (0) is allowed."
            See here: http://cs.oswego.edu/pipermail/concurrency-interest/2016-May/015104.html

            [6] concurrency-interest Q&A with Aleksey Shipilev, March 2016:
            "> int a, b;
            >
            > boolean tryLock() {
            > UNSAFE.putOrdered(a, 1); // Declare intent.
            >
            > // No StoreLoad here as store is not volatile.
            >
            > if (UNSAFE.getVolatile(b) == 1)) {
            > // Reset intent and return false;
            > }
            >
            > return true;
            > }

            Even in the naive barrier interpretation that usually gives stronger
            answers, you have:

            [LoadStore|StoreStore]
            a = 1;

            r1 = b;
             [LoadLoad|LoadStore]"
            See here: http://cs.oswego.edu/pipermail/concurrency-interest/2016-March/015037.html

            [7] C++ memory_order_release definition:
            "A store operation with this memory order performs the release operation: no reads or writes in the current thread can be reordered after this store. All writes in the current thread are visible in other threads that acquire the same atomic variable (see Release-Acquire ordering below) and writes that carry a dependency into the atomic variable become visible in other threads that consume the same atomic (see Release-Consume ordering below)."
            See here: http://en.cppreference.com/w/cpp/atomic/memory_order
Many thanks to A. Shipilev, M. Thompson, D. Lawrie and C. Ruslan for reviewing; any remaining errors are their own and they shall be most severely reprimanded for them.
            
                                   

            Java Flame Graphs Introduction: Fire For Everyone!

FlameGraphs are superawesome. If you've never heard of FlameGraphs and want to dive straight in the deep end, you should run off and check out the many many good resources provided by Brendan Gregg in his one stop shop page here. This post will give a quick intro and some samples to get you started with collecting profiles for all JVMs everywhere. I'm taking a slightly different tack than Brendan in presenting the topic, so if it turns out my explanations suck you should see if his make more sense.

            What's the big deal?

            If you've ever used a profiler to look at your code you will have seen 2 profile reports typically:
            1. Flat profile: This is often presented as the "top X" methods/classes/packages where time (or samples, or ticks or whatever) is spent. This is useful as it immediately shows up common bottlenecks across your code, but these are shown out of context. Sometimes this is enough, but often in larger application profiles context is significant. This representation is very useful when a method with a high overall impact is called from many callsites, making each callsite cheap but the method itself significant.
2. Tree profile: This profile will present you with a call tree where each method is a node with a total and self time quantity. The self measure is the amount of time spent in the method itself (the number of samples in which the method is the leaf), and total is the total number of samples in which it shows up (leaf and node).
            The problem with the tree view is that it is very unpleasant to navigate. Click click click click and as the stack deepens it becomes harder to look at and ingest. Enter FlameGraphs.
            FlameGraph represents a tree profile in a single interactive SVG where:
The x-axis shows the stack profile population, sorted alphabetically (it is not the passage of time), and the y-axis shows stack depth. Each rectangle represents a stack frame. The wider a frame is, the more often it was present in the stacks. The top edge shows what is on-CPU, and beneath it is its ancestry.
Like most visualisations, it makes sense when you see it rather than have it explained. Let's start with data sets we can easily grasp and see what they look like.

            Synthetic Samples For Starters

For instance, what does a single stack sample look like? The FlameGraphs SVG generating script takes as its input a "collapsed stacks" file which has a dead simple format: frames separated by semi-colons, followed by the number of times this stack was sampled. Here's a dummy handwritten example of a single sample file (call it single-stack.cstk):
            main;0;1;2;3;4;5;6;7;8;9;10 1

            We can feed this to the flames (now is a good time to clone this repo and try shit out):
            flamegraph.pl single-stack.cstk > single-stack.svg

            Here's what a single stack trace looks like:
[Interactive flame graph SVG omitted]

            But a single stack trace is just one data point, not a profile. What if we had 1M samples of this same stack?
[Interactive flame graph SVG omitted]

Well... it would look pretty much the same, but if you hover over it, it will tell you it got 1M samples. It looks the same because we still have 100% the same stack for the whole profile. It's the same profile.
            "BUT!" I hear you say, "But, colours?". Yes the colors mean nothing at this point, but will become interesting later. The default colour palate is red and the choice of colors is random, hence the different colour selection changes from run to run. Just forget colors for a second, OK?
            Right, next we want to look at a set of samples with a few more stacks:
            main;0;1;2;3;4;5 1
            main;0;1;2;3;4;5;6 2
            main;0;1;2;3;4;5;6;7 3
            main;0;1;2;3;4;5;6;7;8 4
            main;0;1;2;3;4;5;6;7;8;9 5


[Interactive flame graph SVG omitted]

            Now you can also get a feel for what clicking around does and how you zoom in and out.
By now I hope you get the picture for how a bunch of stacks and frequencies look with simple data sets. The last synthetic example to look at has several root frames and a little more varied stacks. Let's try this:
            main;a;1;2;3;4;5 1
            main;c;1;2;3;4;5;6;7 4
            main;a;1;2;3;4;5;6 2
            main;c;1;2;3;4;5;6 4
            main;c;1;2;3;4;5;6;8 4
            main;b;1;2;3;4;5;6;7 3
            main;b;1;2;3;4;5;6;8 3
            main;b;1;2;3;4;5;6;9 3
            main;d;1;2;3;4;5;6;7;8;9 5


            And we get this:
[Interactive flame graph SVG omitted]

We see here that stacks are sorted alphabetically and ordered from left to right. The ordering has nothing to do with the order in the file. The collapsed stacks format is itself an aggregation with no view on timing. So the order from left to right is only about merging, not time or anything else. We can see that stacks which share a common parent naturally aggregate under that parent. The width of each frame is its relative total-time share. Its self-time share is its top exposure, or how much of it is not covered by its callees, the frames on top of it.

            Tree View vs Flames

Now that we got the hang of this flamy thing, let's take a look at the same profile using 2 presentations: the venerated tree-view and this new hipsterish whatever flame thing. The following is a profile collected using honest-profiler for a netty benchmark:

In a typical workflow I step further and further into the hot stack, but this pushes the big picture out of my view. I would now have to go back up and fold it to see what hides under other hot branches in the tree. It's a familiar and annoying experience if you've ever used a profiler with this kind of view. The problem is that Java class and method names are typically long, and stacks are quite deep. This is a simple application and I quickly run out of room.
            Here's the FlameGraph for the same profile (I chose green, because later it makes sense):
[Interactive flame graph SVG omitted]

            NOTE: I made all the flame graphs in this post narrow so they fit the layout. They don't have to be this narrow. You can set the width to whatever you like, I used "--width=700" for the graphs in this post.
            We can see the root frames quickly break out to main activities, with the deep netty stack now visible upfront. We can click and zoom easily. I also find the search ability which colors matching strings useful to highlight class/package participation in the profile. Prominent flat-tops indicate hot leaf methods we might want to look at.
            It's perhaps a matter of taste, but I love it. I've been using flame graphs for a while and they are pure genius IMO. I find the image itself is intuitively more approachable, and with the ability to quickly zoom in/out and search I can usually quickly work through a profile without losing sight of the full picture.
            So how do you get one?

            It's Bob! yay?

            Everybody Gets A FlameGraph!

Yes, even you poor suckers running JDK 1.3 on Windows XP! I don't recommend this method of profiling if you have a more recent JVM, or if your JVM supports AsyncGetCallTrace, but if your deployment is stuck in the past you can still be in on this. This is because ALL JVMs must support JVMTI and AFAIK allow you to hit them with jstack/JVisualVM/hprof. It's a terrible way to profile, there's a lot of overhead, and usually you can find a better way, but this is universally available. Collecting a sample via jstack is (a terrible idea) quite easy. Just find the pid of the process you want to profile using jps and then do something like:
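# take ~100 thread dumps, 100ms apart; more samples make for a better profile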
            for i in {1..100}; do
              jstack <pid> >> iloveoldshit.jstk;
              sleep 0.1;
            done
            And Bob is your relative (which is a good thing apparently).
            Once you've collected a large enough sample for your application you can go on and feed flame graphs:
            cat iloveoldshit.jstk | ./stackcollapse-jstack.pl | ./flamegraph.pl --color=green > jstack-flames.svg
            And you get:
[Interactive flame graph SVG omitted]

This is the same benchmark as before, but a different profile with safepoint bias. You can compare the two by scrolling up and down, or you can use FlameGraphs to diff the 2, as we'll do in a moment.
            FlameGraphs supports converting jstack output into collapsed stacks (as above). Efforts exist on GitHub to convert the hprof format which JVisualVM produces (as well as other profilers) into collapsed stack format.
            So for all my poor readers, struggling under the oppression of JVMs for which better means to spill out stack samples do not exist, I got your backs, you too can be looking at flame graphs!
            But seriously, just move on.

            Level Up: AsyncGetCallTrace or JFR

Now, if you want a better profiler, one which does not bring your application to a safepoint and pause ALL your threads at each sample, AND you are running a 1.6 or higher JDK (OpenJDK/Oracle/recent Zing) on Linux/Mac/BSD, you can use Honest-Profiler to collect your profile. If you've got Oracle JDK 1.7u40 or higher you can use Java Flight Recorder (if you use it in production you need to pay for the licence). These profilers rely on AsyncGetCallTrace to record the Java stack safely on interrupt (from a signal handler, not at a safepoint).
            To collect with Honest-Profiler I start my JVM with the following parameter:
            -agentpath:$HONEST_PROFILER_HOME/liblagent.so=host=localhost,port=4242,logPath=$PWD/netty.hpl
            Then, when I feel the time is right, I can start and stop the profile collection
            echo start | nc localhost 4242
            echo stop | nc localhost 4242
            To convert the binary format into collapsed stacks I need to use a helper class in the honest-profiler.jar:
            java -cp $HONEST_PROFILER_HOME/honest-profiler.jar com.insightfullogic.honest_profiler.ports.console.FlameGraphDumperApplication netty.hpl netty.cstk
            I can then feed the flamegraph script the collapsed stacks file and get the result which we've already seen.
            To convert JFR recordings to flame graphs see this post. But remember children, you must pay Oracle if you use it in production, or Uncle Larry might come for you.
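For reference, kicking off a JFR recording on an Oracle JDK (1.7u40 or later) looks something like the line below; the duration, recording file name and application jar are placeholders:
java -XX:+UnlockCommercialFeatures -XX:+FlightRecorder -XX:StartFlightRecording=duration=120s,filename=netty.jfr -jar my-netty-benchmark.jar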


            Bonus: Diff Profiles!

            A nice benefit of having 2 different profilers produce (via some massaging) a unified format for flame graphs is that we can now diff profiles from 2 profilers. Not something that is generally useful, granted. But diffing profiles is an established capability in many profilers, and is usually used to do a before/after comparison. Flame Graphs support this via the same visualization. Once you have converted your profiles into the collapsed stacks format you can produce a diff file and graph it:
            ./difffolded.pl -n A.cstk B.cstk | ./flamegraph.pl > A-B-diff.svg
            Diffing the Honest-Profiler and jstack profiles gives us the following:
[Flame graph SVG: differential graph of the Honest-Profiler vs. jstack profiles]

            The white squares are not interesting, the red/pink squares highlight the delta of self samples as a percentage of total samples (not so intuitive). I admit it may seem a tad confusing at first, but at least it draws your eyes to the right places. More on differential flame graphs here.
Note: to make this diff work I had to shave off the thread names from the jstack-collected collapsed stacks file.
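For the curious: assuming you saved the collapsed jstack output to a file (say jstack.cstk) and the thread name shows up as the leading frame on each line, something along these lines would strip it (hypothetical, adjust the pattern to your file):
# drop the first ';'-delimited field (the thread name) from each collapsed stack line
sed 's/^[^;]*;//' jstack.cstk > jstack-nothreads.cstk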

            Further Bonus: Icicle Graph

Sometimes the bottleneck is not a particular call stack plateau, but rather a particular method being called from many call sites. This kind of bottleneck will not show well in a flame graph, as the different stacks with similar tops get split up and may not stand out. This is really where a flat profile is great, but we can also flip the flame graph view to highlight the top-method merging:
            cat netty.cstk | ./flamegraph.pl --reverse --invert --color=green > netty-icicles.svg
            I've filtered the stacks to show only a relevant portion:
[Icicle graph SVG: reversed/inverted flame graph of the filtered netty stacks]
This is not dissimilar to functionality offered by other profiler GUIs which let the drill-down start from hot methods and work outwards to their callers in a tree view presentation.

            Level UP++: Java Perf Flame Graphs FTW!

            If you are so fortunate as to:
1. Be running OpenJDK/Oracle 1.8u60 or later (this functionality is coming to Zing in a near-future release, fingers crossed)
2. Be running on Linux
3. Have permissions to make this run
            You can get amazing visibility into your system by using a combination of:
1. Java with: -XX:+PreserveFramePointer (also recommended: -XX:+UnlockDiagnosticVMOptions -XX:+DebugNonSafepoints so unfolded frames are more accurate). A sample command line follows this list.
            2. perf-map-agent:  This attachable agent dumps a symbol mapping file for all runtime generated code in the JVM, enabling perf to correctly resolve addresses to methods. You'll need to clone and build.
3. perf: You'll need permissions and you'll need to install it. I assume you are root of your own machine for simplicity.
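Putting the flags from point 1 together, the JVM launch line might look something like this (my-app.jar is just a stand-in for your application):
java -XX:+PreserveFramePointer -XX:+UnlockDiagnosticVMOptions -XX:+DebugNonSafepoints -jar my-app.jar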
With the above working you can collect a perf profile of your Java process (by itself or as part of the whole system). This results in a perf.data file and a perf-<pid>.map file in your /tmp folder. You can then proceed to generate a collapsed stack profile from that data; the simplest way to get going is by using a script packed with perf-map-agent:
            perf-java-flames <pid>
This will ask for a password as it needs to sudo a few things. Be the sudo you want to see in the world. After a suspenseful wait of 15 seconds you'll get this:
[Flame graph SVG: mixed-mode perf profile of the netty benchmark (Java, C/C++ and kernel frames)]
Note: Netty deploys its native lib into tmp, loads it and deletes it, which means perf gets lost looking for it. I deleted it from the benchmarks jar and loaded it directly using LD_LIBRARY_PATH to resolve this visibility issue. It doesn't make a huge difference, but keep it in mind in case you try this out.

            The green frames are Java, and the rest are all the magic which happens to help Java along. Here's what happened:

• Red frames are C library or kernel code. We can see that the socket writes in the original profile actually go into native land. We now have further visibility down the hole. Importantly, this illustrates where hot Java methods are in fact not Java methods at all, so looking at their Java code for optimisation opportunities is futile.
            • Yellow frames are C++. We can see the call chain leading into the interpreter and ultimately into compiled Java code.
• Green frames are Java. BUT if you compare the previously presented profiles with this one you will notice there are some intermediate frames missing here. This is because the frames in this profile are "real" frames, or rather they map to actual stack frames. Inlined methods in Java do not have their own stack frames, so we can't see them (for now, we'll sort this out in a second). Furthermore, the keen observer will notice the familiar "Thread.run" bottom of the stack is missing, replaced by the "interpreter". As is often the case, the run method did not get compiled in this benchmark, so it is not a proper compiled method for which we have a mapping. Methods in the interpreter are opaque in this profile.
            • Some stacks are broken, which can be confusing. In the example above we can see the 13.8 unknown chunk which leads to some JVM activities, but also to some Java code. More on that later.
So, it would seem that we have gained something in visibility into the native/OS/JVM CPU utilization, but lost a lot of the information we had on the Java side. When is this still useful:

• This profile is super useful if you are writing Netty and trying to work out which system calls you end up with from your JNI code, or where time is spent in that code (netty implements its own native epoll selector, very cool). If you are writing an application which utilizes JNI libraries this profile will give you visibility across the divide. The alternative here would be to use 2 profilers and try to correlate them. Solaris Studio also offers some help here; I will one day write a post on Solaris Studio.
• This is not a good example of a profile dominated by JVM threads, but in many profiles the GC activity will show up. This is very useful, as GC and compiler CPU utilization can get in the way of application threads using the available CPU. A Java-only profiler leaves you to correlate GC/compilation logs with the application profile to figure out who ate the pie. It's also an interesting view into which part of the GC is to blame.
• Some JVM intrinsics are confusing to AsyncGetCallTrace, and invisible to safepoint profilers. The biggest culprit I see is array copy. Array copies will show up as failed samples on AGCT profilers (unless, like JMC, they just fail to tell you about failed samples altogether). They show up in this profile (search above for arraycopy), but only a tiny bit.
            • This profile can be collected system wide, allowing you to present a much wider picture and expose machine wide issues. This is important when you are looking at machine level analysis of your application to improve configuration/setup.
            • In depth view of OS calls can inform your configuration.

            'look at me! look at me now!' said the cat.
            'with a cup and a cake on the top of my hat!
            I can hold up TWO books!
            I can hold up the fish!
            and a little toy ship!
            and some milk on a dish!
            and look!
            I can hop up and down on the ball!
            but that is not all!
            oh, no. That is not all...


            Bonus: Inlined Frames! Threads! COLOR!

            We can win back the inlined frames information by asking perf-map-agent to create a more detailed map file with inlining data. This leads to larger map files, but should be worth it.
            You can further tweak the command line to color kernel frames differently and control sample duration and frequency. And while we're a-tweakin' lets also have threads info.
            Here's what you run:
            PERF_COLLAPSE_OPTS="--kernel --tid" PERF_RECORD_FREQ=99 PERF_RECORD_SECONDS=10 PERF_MAP_OPTIONS=unfoldall perf-java-flames <pid>
            And the graph now looks like this:
[Flame graph SVG: perf profile with inlined frames, thread ids and kernel frame coloring]


• The Java frames are now green and aqua. Aqua frames are inlined frames and green ones are "real". This information is not presented at all by most profilers, and is pretty inaccessible in others. Here we can instantly see some interesting inlining challenges in the tall towers of same-sized frames. The compiler inlines through many of them, but maybe eventually gives up; maybe there's something to be won by simplifying the abstraction here?
            • Thread id is added as a base frame. This is helpful in this particular example because there are only 2 interesting threads and I very much want to see this split. It also helps bring back some broken stacks into the fold. Now I can tell these frames belong to the Netty epoll thread. Yay.
            • Orange frames are kernel frames.
• Having the thread id highlights that the non-Java frames on the left are from a variety of threads. If we had more GC/compiler work happening this may become interesting.
• When profiling a large application with thread pools, this separation by thread may not be what you want... but sometimes it is very helpful, like above. In this benchmark I have a thread generating load and a thread I want to profile, so telling them apart works great. At the moment there's no mapping of threads to thread names, but in the future we may be able to easily group thread pools for more meaningful views.

            Bonus: Hack It All Up

            There's very little code in perf-map-agent, and the scripts would take you 5 minutes to read through. You don't have to use the scripts, you can write your own. You can add or enhance features, it's easy to participate or customize. Dig in, have fun :-)
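To give a feel for what's under the hood, the manual pipeline behind perf-java-flames is roughly the following. Treat it as a sketch: exact flags and script locations may differ between versions, and it assumes perf-map-agent has already dumped the /tmp/perf-<pid>.map file for your process:
# record 15 seconds of stacks for the target JVM at 99Hz
perf record -F 99 -g -p <pid> -- sleep 15
# resolve symbols, collapse and graph
perf script | ./stackcollapse-perf.pl | ./flamegraph.pl --color=java > perf-flames.svg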
The FlameGraph scripts are nice and tidy, and the pipeline separation of [profile -> collapsed stacks -> graph] means you can read through them and tweak the bits you care about without worrying too much about the rest. While working on this post I played with the diff presentation a bit. It was my first ever interaction with Perl, and I'm not so very bright, and I still managed to get what I wanted. Surely someone as fine as yourself can do better.
If you look at Brendan's updates page you'll see many, many people are jumping in, tweaking, sharing and making funky things. Go for it!

            Summary And Credits

            So you can have flame graphs, all of you. And you can feed these with inputs from several sources, each with their own set of pros and cons. It makes a great tool in your tool box and may give you that extra perspective you are missing in your profiling.
            There's a few people who deserve mention in the context of the tools above, look them up, they all helped make it happen:
• Brendan Gregg: FlameGraphs' proud daddy. Brendan has written a lot about FlameGraphs; work through his posts and you'll learn plenty.
            • Johannes Rudolph: Author and maintainer of perf-map-agent.
            • Jake Lucianni: Contributed flamegraphs support for inlined frames.
            • Richard Warburton: Author and maintainer of honest-profiler.
            Thanks for reading :-) next up some JVM profile analysis

            What a difference a JVM makes?

JDK 9 is out! But as a library writer this means change, and change can go either way... Once we've satisfied ourselves that JCTools works with JDK9, what other observations can we make? Well, one of the main motivations for using JCTools is performance, and since the code has been predominantly tested and run with JDK8, is it even better with JDK9? Is it worse?

            A JVM gets into trouble

            I started my comparison (and we will not cover anything else cause the fuckers broke early) with the simplest queue, the SpscArrayQueue:
            $ taskset -c 4-7 java -jar jctools-benchmarks/target/microbenchmarks.jar throughput.QueueThroughputBackoffNone -p qType=SpscArrayQueue -p qCapacity=131072 -jvmArgs="-Xmx1g -Xms1g" -i 5 -wi 15 -r 5 -w 1 -f 3

            Oracle8u144:
            offersFailed |    0.020 ± 0.020  ops/us
            pollsFailed  |    0.093 ± 0.102  ops/us
            pollsMade    |  361.161 ± 4.126  ops/us

            Oracle9.0.1:
            offersFailed |    0.065 ± 0.269  ops/us
            pollsFailed  |    5.987 ± 2.788  ops/us
            pollsMade    |   26.182 ± 2.273  ops/us

            Some explanations on method and results:
• This is running on my beefy laptop, lots of memory to spare, Xeon(R) CPU E3-1505M v6 @ 3.00GHz. I set the gov'nor to "userspace" and the frequency to 2.8GHz to avoid CPU frequency scaling and turbo boosting while benchmarking. I use taskset above to pin the JVM process to 1 logical core on each physical core, so no 2 threads share a core. Easier than disabling HT, and sufficient for this exploration.
• The QueueThroughputBackoffNone benchmark is an all-out throughput benchmark for queues where producers and consumers spin to offer/poll (with j.u.Queue semantics). Failures are recorded and the throughput observed is the rate of successful polls per microsecond (so millions per second if you prefer that figure). The benchmark is run with a single producer and a single consumer thread, and is known to be sensitive to producer/consumer speed balance, as contending on an empty/full queue can lead to degraded performance. See some discussion of the benchmark here.
Back to the results. They are not good :(
            Why would this happen? HOW COULD THIS HAPPEN!!!
I profiled this miserable little bastard only to find that on the offer side there's a previously unobserved bottleneck:

What's happening? The whole point of SPSC is that there's no need for a strong memory barrier, no need for lock add or CAS, just some careful ordering (using putOrdered/lazySet). But here we got this LOCK ADDL (line 34) instruction spoiling all the fun and eating all the cycles (though the next line is getting all the blame, typical).
Where did it come from? I didn't put it there. Note that this is an add of 0 to the base of the stack (-0x40), which is how a StoreLoad barrier is implemented (see this interesting post on barrier implementation details). But there's no volatile store in sight.
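To make the "careful ordering" point concrete, here's a minimal sketch (not JCTools code) of the difference between an ordered store and a volatile store on a producer index; on x86 only the volatile store should compile down to that lock add style StoreLoad barrier:

import java.util.concurrent.atomic.AtomicLong;

class ProducerIndex {
    private final AtomicLong index = new AtomicLong();

    void publishOrdered(long v) {
        // StoreStore ordering only -- no StoreLoad fence, no lock add expected
        index.lazySet(v);
    }

    void publishVolatile(long v) {
        // full volatile store -- this is where a lock add/mfence shows up on x86
        index.set(v);
    }
}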

            A JVM gets out of trouble

            TBH, even before bothering to look at the assembly I reached for a big "usual suspect" for performance differences with JDK9: G1GC is the new default GC!
            I quickly verified this theory (that G1GC fucked this up for me) by re-running the same benchmark on JDK9 with the JDK8 default GC (-XX:+UseParallelGC). Got similar results to JDK8. Awesome-ish:
            Oracle9.0.1 (-XX:+UseParallelGC)
            offersFailed |    0.059 ±  0.133  ops/us
            pollsFailed  |    0.147 ±  0.251  ops/us
            pollsMade    |  356.100 ±  6.305  ops/us

The bottleneck I've hit? It's G1GC card marking! I even wrote a post about it. The G1GC write barrier is quite different from the CMS/Parallel one. You'll notice that the barrier discussed there is a little different from the one we see here. Times, they are a-changing...
            So... the G1GC write barrier is ANGRY. Why so angry?

What this means is that the g1_write_barrier_post is pissed because (a rough sketch follows this list):
            • buffer (the backing array for the queue) and element (being offered) are from different regions
            • element is not null
            • card (for the buffer) is not young
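In pseudo-Java the post-barrier check looks roughly like the following. The helper names (regionOf, cardOf, dirtyCardAndEnqueue) are invented for illustration; this is not the actual HotSpot code, just the shape of the checks listed above:

// conceptually executed right after: buffer[index] = element;
if (regionOf(buffer) != regionOf(element)   // cross-region store?
        && element != null                  // an actual reference was stored?
        && cardOf(buffer) != YOUNG) {       // the card for buffer is not young?
    // slow path: dirty the card and enqueue it for refinement --
    // this is the path that includes the StoreLoad (lock addl) we see in the profile
    dirtyCardAndEnqueue(buffer);
}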
Confusingly, when playing around with this issue I moved from a small machine (8gb) to a bigger one (32gb), and when running with a larger heap size specified the issue became a lot less pronounced. This is because if we set the size of the heap to 1g we get 1mb regions, but if we set the size of the heap to 8g (both mx and ms) we get 4mb regions (G1 aims for roughly 2048 regions by default, so region size scales with the heap). We can demonstrate this is the issue by running again with G1 and setting the region size:
Oracle9.0.1 (-XX:+UseG1GC -Xms1g -Xmx1g -XX:G1HeapRegionSize=4m):
offersFailed |    0.009 ±  0.033  ops/us
pollsFailed  |    0.252 ±  0.257  ops/us
pollsMade    |  183.827 ± 16.650  ops/us

So, still not brilliant, but much better. This however implies that the improvement is due to the buffer and the element being allocated from the same region. This is purely a product of the benchmark and the chosen queue size, and is not typical of normal applications. It follows therefore (I think) that the behaviour we see here is quite likely to manifest pathologically for writes into long-standing data structures, such as caches and queues. And indeed, if we increase the queue capacity the situation reverts back.
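To check that yourself, it's the same command line as before with a bigger capacity, e.g. (1048576 is just an illustrative value, not necessarily what I used):
taskset -c 4-7 java -jar jctools-benchmarks/target/microbenchmarks.jar throughput.QueueThroughputBackoffNone -p qType=SpscArrayQueue -p qCapacity=1048576 -jvmArgs="-Xmx1g -Xms1g" -i 5 -wi 15 -r 5 -w 1 -f 3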
            All this messing around with JDK9 vs 8, runtime and GC impact comparison got me thinking of a couple of other regional collectors which might exhibit interesting behaviours here. Namely, Zing C4 and Shenandoah.

            Put some Zing in that thing

I no longer work at Azul, so I had to get an Open Source developer licence, which was a breeze. I had to get some help with getting it all to work on Ubuntu 17.10, but 16.04 works off the bat (just follow the instructions). Starting with the same parameters I got:
            Zing8u17.12:
            offersFailed |    0.016 ± 0.010  ops/us
            pollsFailed  |    0.013 ± 0.022  ops/us
            pollsMade    |  302.288 ± 3.602  ops/us

            So Zing is 20% slower than Oracle ParallelGC here, but significantly better than G1 (either case). The JMH perfasm profiler does not work with Zing, though Azul does have a matching tool if you ask their support. To look at the profile I can either use ZVision, or Oracle Studio. I went with the latter, just because.
            The profile is hard to read, so I might go into it another time, but the question that seems obvious is: "Is Zing slower than Oracle+ParallelGC because of read/write barrier costs?"
Zing, after all, is not a GC add-on to OpenJDK, but a completely different runtime+GC+compiler. In particular, Zing has recently switched from its old C2-like compiler (they forked paths many moons ago, but share a parent in Cliff Click) to an LLVM-based compiler called Falcon, which is now the default. Testing that quickly by forcing Zing to use the C2 compiler yields the following results:
            Zing8u17.12 (-XX:+UseC2):
            offersFailed |    0.034 ±  0.055  ops/us
            pollsFailed  |    0.010 ±  0.017  ops/us
            pollsMade    |  198.067 ± 31.983  ops/us

            OK, so Falcon is a big win for Zing here, that's not the issue. Can we take the read and write barriers out of the picture?
Sure we can! Zing supports 2 exciting GCs: you may have heard of the C4 GC, but Zing also supports NoGC (it's very Zen), which is exactly what it sounds like. Running with no GC may however remove some of the positive effects GC has (e.g. compacting and relocating data, not to mention not crashing when you've allocated more than your heap size and never collected), so we need to run NoGC with barriers, and NoGC with no barriers:
            Zing8u17.12 (-XX:+GPGCNoGC -XX:+UseSVBs -XX:+UseLVBs):
            offersFailed |    0.035 ± 0.053  ops/us
            pollsFailed  |    0.022 ± 0.043  ops/us
            pollsMade    |  302.433 ± 2.675  ops/us

            So, turning GC off makes no difference at all for this benchmark. That's great, as we can consider the removal of the barriers in isolation:

            Zing8u17.12 (-XX:+GPGCNoGC -XX:-UseSVBs -XX:-UseLVBs):
            offersFailed |    0.099 ±  0.070  ops/us
            pollsFailed  |    0.027 ±  0.048  ops/us
            pollsMade    |  314.498 ± 17.872  ops/us

            Note that we see:
            1. Some improvement when barriers are removed.
2. Increased variance in results. This was due to large run-to-run variance which requires further digging.
So, while we can certainly see a difference here which is due to read/write barriers, that is not the whole story (maybe another day). My gut feeling is that Falcon is over-inlining in this instance.

            Oh Shenandoah

            For Shenandoah I grabbed one of the builds provided by the benevolent Shipilev here. The build I got is this one: build 1.8.0-internal-jenkins_2017_11_12_03_35-b00
Running with the same heap size, but remembering that these builds don't default to Shenandoah:
            Shenandoah (-XX:+UseShenandoahGC):
            offersFailed |    0.031 ± 0.024  ops/us
            pollsFailed  |    0.009 ± 0.025  ops/us
            pollsMade    |  143.165 ± 3.172  ops/us

Note that for Shenandoah there's no massive imbalance between the offer/poll sides, which indicates the issue is not pathological to one method or the other. For G1GC the problem was very clearly on the offer side. I had a peek at the assembly, but it's going to take me some time to get to grips with what's going on, as it's a completely new set of tricks and quirks to look at. To get a high-level view though, it's interesting to compare the HW counters for ParallelGC/Zing/G1/Shenandoah:

                                  | PGC     |  Zing    | G1      | Shenandoah
            pollsMade             | 368.055 |  303.039 | 194.538 | 140.236
            CPI                   |   0.244 |    0.222 |   0.352 |   0.355 
            cycles                |   7.629 |    9.319 |  14.764 |  19.998 
            instructions          |  31.218 |   41.972 |  42.002 |  56.294 
            branches              |   6.052 |   11.672 |   7.735 |   9.017 
            L1-dcache-loads       |  10.057 |   10.936 |  14.320 |  26.707 
            L1-dcache-load-misses |   0.067 |    0.149 |   0.162 |   0.100 
            L1-dcache-stores      |   3.553 |    3.042 |   4.057 |   5.064 

• This is a tiny workload, with 30-50 instructions per operation. I want to make it clear that it is very easy to have large differences between JVMs in such specific scenarios. This workload is all about loading and storing references in/out of an array. The data is all L1 resident; this is NOT A REPRESENTATIVE COMPARISON OF PERFORMANCE. If you want to know how these JVMs/GCs can help your application, run a representative workload with your application.
            • Seriously, let's not start a "my JVM/GC is better than thou" war here, OK people?
            • For simplicity of comparison I've run PGC/G1/Shenandoah out of the Shenandoah build that includes all 3. This makes for a simpler comparison as they all share the same compiler, but is not comparing with the relevant Oracle build (though it should be pretty much the same).
• Zing has a better CPI than PGC, but 10 more instructions per operation. These include 1 extra load and 6 more branches. There are no branch misses in this workload, so the branches are just extra pressure on the branch predictor and more instructions. The 10-instruction difference translates into a 1.6 cycle difference, which is indicative of the branch predictor's success in reducing the impact of the branches. These extra branches are the Zing LVB, or read barrier; each reference load costs an extra branch. Zing is doing fewer stores here, due to its defensive approach to card marking.
• G1 is given here with the good case (same region), as we already covered the bad case. We see G1 increasing the loads by 4, but using fewer branches than Zing: only 2 extra branches, related to the write barrier. We also see an extra store.
• Shenandoah is using 3 extra instructions, and 16 extra loads. This is more than I expected. Since Shenandoah uses a Brooks pointer you would expect an extra load for each reference load. If we estimate from the Zing branch increase that we have 6 reference loads, I'd expect to see 6 extra loads on the Shenandoah side. I assume the other loads are related to card marking, but I will need to learn more about this collector to say anything. Shipilev has expanded on my rudimentary analysis here. His conclusion: "read and write barriers around Unsafe intrinsics are very active. C2 handling of Unsafe intrinsics uses CPUOrder membars a lot (http://hg.openjdk.java.net/shenandoah/jdk10/file/1819ee64325f/src/hotspot/share/opto/library_call.cpp#l2631), which may inhibit some barrier optimizations. The workload is also tied up in a very unlucky volatile-predicated loop that prevents barrier hoisting. Pending code generation improvements alleviate barrier costs even when they are not optimized."
            Since I fully expect all the crusaders to get hot and bothered about the above I thought I'd throw in...


            A Holy Graal!

Graal is the next-gen compiler for HotSpot. Coming out of Oracle Labs and already running in production at Twitter, I thought we should add another compiler dimension to this mix. To run Graal you can use: "-XX:+UnlockExperimentalVMOptions -XX:+EnableJVMCI -XX:+UseJVMCICompiler"
It comes included in the Java 9 package!!! So how does it do?

            Oracle9.0.1 (-XX:+UseParallelGC -XX:+UnlockExperimentalVMOptions -XX:+EnableJVMCI -XX:+UseJVMCICompiler)

            offersFailed |      ≈ 0          ops/us
            pollsFailed  |    0.347 ± 0.441  ops/us
            pollsMade    |   51.657 ± 3.568  ops/us

            WHHHHHHHHYYYYYYYYYYYY!!!!!!!!
Why can't we have nice things? Well... It turns out the good folks at Oracle Labs have not yet implemented putOrdered as nicely as C2 and Falcon have, and have thus completely buggered up my SPSC queue :(


            Summary: There Are Many JVMs In My Father's House

Variety is the spice of life, as they say. In the narrow, narrow use case presented above we saw different JVMs/GCs/JITs throwing up all sorts of different behaviours. Some good, some less so. For this workload there's very little happening: no GC, no exciting vectorization opportunities, it's very limited. But being limited has the benefit of simplicity and the opportunity to contrast.
Also note that by profiling and refining this code on C2 I have perhaps overfitted it to one compiler at the expense of others; I certainly had more opportunity to eliminate any issues for the C2+ParallelGC scenario...

I hope you enjoyed the tour. I encourage you to take these new friends home and play with them ;-).

            How Inlined Code Makes For Confusing Profiles

            Inlining is a powerful and common optimization technique used by compilers.
            But inlining combines with other optimizations to transform your code to such an extent that other tooling becomes confused, namely profilers.

              Mommy, What is Inlining?

              Inlining is the mother of all optimizations (to quote C. Click), and mothers are awesome as we all know. Inlining is simply the practice of replacing a method call with the method body. This doesn't sound like much of an improvement, but it's potentially a massive win for a couple of reasons:
• Expanding the scope of other optimizations! Calling a method with a constant parameter may just end up getting folded into a constant, for instance (see the sketch after this list). Most compilers take the view that you either know everything about a method, because it's the method you are compiling ATM, or you know next to nothing about it. Calling another method may end up blocking, or having some unknown side effect, or bringing the world to an end. Inlining expands the horizons of all available optimizations into the inlined code. These benefits can be quite dramatic, eliminating much of the inlined code (callee) on the one hand and assisting in supporting assumptions about the compilation unit at hand (caller).
• For very small methods, the call overhead may actually exceed the cost of the method itself. Importantly, and perhaps counter-intuitively, the cost of calling a method is not fixed. The cost will depend, for example, on the calling convention (whose job is it to keep track of registers) and pre/post conditions (e.g. ordering and safepoints).
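Here's a hypothetical illustration of the first point; the class and method names are made up, but the transformation is the kind of thing a JIT can do once square() has been inlined into area():

class InlineDemo {
    static int square(int x) {
        return x * x;
    }

    static int area() {
        // after inlining:         return 6 * 6;
        // after constant folding: return 36;
        return square(6);
    }
}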
But inlining is not without its potential issues:
• "Code explosion". If I have a method of size 128b and that method cannot be reduced in any form by the context of its caller, every time I inline it I increase my code size by 128b (a bit less, as I can remove the call-related prologue/epilogue). This may lead to a very bloated application, which will put pressure on my instruction cache as well as the JVM code cache.
• Increased CPU pressure from the compiler itself: being challenged with larger and larger compilation units will lead to higher compile times, increased CPU usage, delayed compilation etc.
• Increasingly complex compilation units may result in less optimal code than simpler ones. This is a real-world limitation, though I would think that in theory, given more context to work with (and an infinite amount of cpu/memory/monkeys to throw at the problem), the compiler should produce a better overall result. In practice I have seen several cases where limiting inlining can improve results.
              Picking the right boundaries is challenging, but luckily we got them bright compiler engineers to worry about that.

              How Much Inlining Can We Do?

              The amount of inlining a compiler does will depend on many factors. I'm not going to get very far if I try and explain the heuristics involved in great detail but some basic dimensions to the problem are:
              • Who is the callee? In Java almost all method calls are 'virtual'. This means that at runtime any number of implementations could possibly exist for a particular call. This is traditionally where you'd give up on inlining. To make inlining happen anyway the compiler must convert a virtual call into a static call before it can be inlined. The same code in a different runtime may induce different compiler decisions on this. JIT compilation allows the compiler to tailor the decision in a way that AOT compilation cannot (in a dynamic environment). 
• How big is the callee? Most compilers put some upper limit on how big a caller method can grow, as well as how big a callee method can be and still get inlined. A very large caller will be blocked from further inlining, and even a medium-sized callee might be blocked from being inlined. What is considered big will depend on the compiler, configuration and the profile. Limitations on compilation time for JIT compilers also drive this decision, at least partially.
              • How deep is the already inlined stack? How many veils can the compiler pierce? What about recursion?
So, that's the general shape of the problem: Virtual x Size x Depth. And if you were gonna try and write an inlining strategy for a compiler you'd need a lot more detail, but we can keep going with this for now.

              How Much Inlining Does Java Actually Do?

              And by Java I mean the Oracle JDK 8u161. These things change whimsically from run to run, release to release, JVM to JVM etc, so stick a big YMMV on this shit.
We can have the JVM helpfully share compilation details with us via a bunch of flags, but most importantly for this feature we will need: -XX:+PrintInlining -XX:+PrintCompilation (the output is a bit hard to reason about because of concurrent logging...).
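For example, a run of the benchmark jar from earlier with these flags switched on might look roughly like this (note that PrintInlining is a diagnostic flag, so it needs -XX:+UnlockDiagnosticVMOptions in front of it; the log file name is just a placeholder):
java -XX:+UnlockDiagnosticVMOptions -XX:+PrintCompilation -XX:+PrintInlining \
     -jar jctools-benchmarks/target/microbenchmarks.jar throughput.QueueThroughputBackoffNone \
     -p qType=SpscArrayQueue > compilation.log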
              For the sake of demonstration, here's the output of the offer method of SpscArrayQueue from a throughput benchmark I run regularly for JCTools:

              Some observations:
              • Tiny methods and some intrinsics are inlined even by the early compilation.
              • The offerSlowPath method which is not inlined by the first compilation is inlined by the C2 compiler at a second pass.
              This is not a particularly challenging piece of code, but I picked it because it is easy enough for a human to follow. Small methods get inlined, even if they are nested a bit in other tiny methods.
              There's a performance related nugget to dig here:
• I split out the offerSlowPath method with the intent of not having it inlined (or at least leaving the decision to the compiler). The compiler decided it is better off inlined. Who's right? Has the compiler defeated my puny human brain? Am I as worthless as my mother-in-law suspects? These questions will be answered some other day.

              Where Do Profilers Come In?

              In traditional Java profilers no attempt is made at distinguishing between inlined frames (calls to methods which have been inlined) and 'real' frames (real method calls on the stack). Telling the 2 apart can be useful:
              • Inlining is often a good thing, so failure to inline a method can be worth looking into. It's also instructive to see just how much the compiler can inline, and how many layers of distraction it must cut through to get to where the actual code you want to run is. Excessive "Manager/Abstract/Adapter/Proxy/Wrapper/Impl/Dimple/Pimpl" layering however can still prevent inlining and may also lead to erectile dysfunction, early onset dementia and being knifed by your colleagues.
              • Top of stack hot methods which are not inlined present obvious curios. Perhaps callers are too large, or the method itself?
• Multiple implementors of the same method which are not inlined indicate a megamorphic callsite. If one of these is significantly hotter than the rest you might be able to special-case it (peel off the callsite with an instanceof check, effectively rolling your own conditional inlining; a rough sketch follows this list).
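Here's a hypothetical sketch of that peeling trick; Shape and Circle are made-up types, and the point is simply that the peeled branch gives the JIT a monomorphic callsite it can inline:

interface Shape { double area(); }

final class Circle implements Shape {
    final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

class Peeling {
    static double areaOf(Shape s) {
        if (s instanceof Circle) {
            // monomorphic call site -- easy for the JIT to devirtualize and inline
            return ((Circle) s).area();
        }
        // still megamorphic, but hopefully rarely taken
        return s.area();
    }
}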
              But missing out on some good opportunities is not the issue I'm concerned with in this post. The point I'd like to get across is that Inlining makes an already confusing reality much much more confusing.

              A Confusing Reality

              CPUs execute instructions, these instructions are not our code but the product of a series of compilation and optimization steps. It follows that if we hope to learn where the bottleneck in our code/application is we must be able to translate back from CPU instructions (the product of C1/C2/Other JIT compilers) to bytecode (the product of javac/JVM language compiler/bytecode generating agent/Other) to Code coordinates (file + line or class + method).
              This is not a straight forward mapping because:
• Some instructions do not map to any bytecode: this can happen when the JVM generates code for its own accounting purposes.
• Some instructions map to many bytecodes: the compiler can be quite clever in substituting many operations with one specialized instruction, or the result of one computation can be reused elsewhere.
              • Some bytecodes do not map to Code: this can happen due to bytecode generation.
• Sometimes the compiler just fails in its bookkeeping :-(.
A modern JIT compiler employs a significant number of optimizations on any given method, allowing it to deliver the speed we love, but making reverse engineering the instructions back into code a challenge. The mapping rules employed by profiling tools trying to bridge this gap, and by the compilers trying to give them a usable index, boil down to:
              • We assume instruction X translates to at most 1 bytecode location.
              • When a mapping does not exist, find the nearest instruction Y which has a mapping and use that.
This is already confusing, as the imperfect mapping, coupled with the compiler's license to reorder code, combines to make the "nearest instruction" possibly "not the nearest line of code".
              The problem is made worse by the inaccuracy of the profiler observations themselves. It is a known issue, largely unsolved, that the reported instruction from which the mapping starts is often a few (in theory many) instructions away from the intended sample location. This can have a knock on effect on the ultimately blamed/mapped code. This issue was touched on in a previous post on AGCT profilers.
When looking at a single method this is often a pesky, but ultimately manageable, problem. Sure, the profiler pointed to line A, but any fool can see the problem is line B... silly little hobbit.

              MOAR, BIGGER, CONFUSION

              So, given that:
              • The instruction reported is a few instructions off.
              • The instructions do not map perfectly to lines of code.
              • The order of instructions does not have to match the order of the code.
              We can see how even a small inaccuracy can translate into a large distance in lines of code between the cost we are trying to profile and where we end up. This is made exponentially worse by inlining.
Where we had one method, we now have many methods, from many classes. As the code gets pulled in it is subjected to the same compiler "remixing", but over an expanded horizon. For instance, a computation repeated in the caller and callee can now be done once. Considering that inlining 4-6 layers of indirection is very common, the "LOC distance" between "near instructions" can end up being very confusing.
I should really elaborate with a clear example, but I do not have a repeatable experiment to hand and have already stalled on this article for a few months while distracted elsewhere. An interesting example of how this could work out is available here. In this issue we have the wrong instruction being blamed for the cost of a cache miss in a loop. The cache miss originates in the bowels of an inlined LinkedHashMap::get, but the nearest instruction with a mapping is from Field::binaryValue (line 441); it's easy to see how this would end up a very misleading profile if considered from a code perspective. When observed at the assembly level the instruction slip is much easier (as much as reading assembly is easy) to see through, but profiling at the assembly level is something very few people do and very few tools support.

              Summary

Inlining is good, and profiling is good. But profiling modern compiled, inlined code presents us with challenges which are not easily solvable by looking at the common output of most tools. The more sense the compiler makes of our code, its context and its execution tree, the less obvious the mapping between instructions and code.
              So, what can we do? I think we should have our tools approach the problem in 2 ways:
1. Fuzzier profiles: What would happen if, instead of assigning cost to a single instruction (and its code coordinates), we distributed the cost over a range of instructions? We could have more or less sophisticated models, but even a naive bell curve around the sampled instruction would bring more context into the profile, and may help. This is hypothetical, as I am not aware of any tools which attempt this ATM.
2. Differentiate between real/inlined frames: This is something the JVM is certainly capable of doing, but most profiling APIs do not support it. AFAIK only Oracle Studio and perf/bcc + perf-map-agent support views on real vs. inlined frames. I tend to use the latter more than the former. There's talk of adding support for this to async-profiler.

              Many thanks to V. Sitnikov for reviewing :-).

This post is one of a few on the topic of profiling, check the other ones out.