Security flaws caused by compiler optimizations (redhat.com)
136 points by nfltn on Aug 21, 2019 | 56 comments


> However, the programmer failed to inform the compiler that the call to ereport(ERROR,...) does not return. This implies that the division will always execute.

I don’t think that’s correct. The compiler is allowed to assume that functions marked noreturn do not return, but it’s not allowed to assume that functions not marked noreturn do return. In other words, it’s not undefined behavior for a function to call abort(), enter an infinite loop, etc. instead of returning. It would be very strange if it were!

There’s a somewhat related spec clause that lets the compiler assume that certain types of loops will eventually terminate [1], but that doesn’t apply here. Therefore I think the mentioned compiler optimization is illegal. The issue was reported back in 2011; it would be interesting to see whether newer versions of GCC, or Clang, behave the same way.

[1] https://stackoverflow.com/questions/16436237/is-while1-undef...
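
To make the pattern concrete, here's a minimal sketch (report_error stands in for ereport(ERROR, ...); _Noreturn is C11, GCC also accepts __attribute__((noreturn))):

    #include <stdio.h>
    #include <stdlib.h>

    /* stand-in for ereport(ERROR, ...) */
    static _Noreturn void report_error(const char *msg)
    {
        fprintf(stderr, "%s\n", msg);
        abort();
    }

    static int safe_div(int a, int b)
    {
        if (b == 0)
            report_error("division by zero");
        /* With _Noreturn the compiler knows the division is unreachable when
           b == 0; even without the annotation it may not assume that
           report_error() returns. */
        return a / b;
    }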


An infinite loop with no side effects is undefined behavior.


I didn't say no side effects. But even an infinite loop with no side effects is well-defined if the controlling expression is a constant expression; see the link in my previous post.
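
A quick sketch of that distinction (assuming C11 6.8.5p6):

    /* Controlling expression is a constant expression: well-defined, the
       implementation may not assume the loop terminates. */
    void spin_forever(void)
    {
        for (;;) { }
    }

    /* Non-constant controlling expression and no side effects: the
       implementation is allowed to assume this loop terminates. */
    void maybe_assumed_finite(int x)
    {
        while (x) { }
    }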


That seems to be the conclusion of the mailing list thread as well: there was an initial bad gcc bug report that didn't understand the problem, and seemingly no follow-up with a proper bug report.


I remember when CVE-2009-1897 was current, because it is an obvious example where no one would expect the null check to work:

    struct sock *sk = tun->sk;   /* tun is dereferenced here */
    unsigned int mask = 0;
    if (!tun)
        return POLLERR;          /* the check the compiler removes */
Obviously "sk" is not used until after the null check, but if we read it line by line as we would expect a naive compiler with no optimization to act, the pointer is followed before the null check, and null should produce a crashing program. It would seem that anyone expecting it to work would assume the evaluation of the initial assignment of sk to be lazy at first use, which is a very strange assumption.

Still, I remember in 2009 people writing about that snippet as if it were a surprising result and the compiler had done something wrong.


The article addresses this.

The reasoning is that, since the pointer has already been dereferenced (and has not been changed), it cannot be NULL, so there is no point in checking it. This logic makes perfect sense except in the case of the kernel, where NULL might actually be a valid pointer. The default selinux module allowed mapping the zero page, converting this bug into a privilege escalation flaw. This was, however, later corrected by preventing processes running as unconfined_t from being able to map low memory.

EDIT: On rereading your comment, I think I realized you might be getting at something a bit different, which is that even if NULL is a valid address, no one in their right mind should be dereferencing it, so this code is still illogical from a human perspective (doing a NULL check after dereferencing) and there is no good reason to write it that way. That seems to make sense to me, but I don't have any production C experience.


When NULL corresponds to a valid address, you don't want the check stripped out of this sort of code.


Out of curiosity, what are the expected contents of the zero page area? Is access allowed to it just because it's coming from the kernel instead of a userland process?


That is platform dependent, and doesn’t matter. NULL need not be the ‘all zeroes’ bit pattern (1), and even if it were, the C standard says dereferencing a NULL pointer leads to undefined behavior (https://en.wikipedia.org/wiki/Null_pointer#Null_dereferencin...)

(1) Recent C standards have backpedaled a bit on ‘it should be possible to write a conforming C compiler for every CPU ever made’ (for example, IIRC, by fixing a char to be 8 bits), so that might be a thing of the past.


On my ARM Cortex processor it's __StackTop

On a read it's a completely valid address. Write generates a bus fault.

On Atmel ATmega parts it's the reset vector. A write does nothing.


Braino. On ATmega parts, address 0 is the R0 register.


In this case it's because it's in the kernel (why don't they unmap it? There's presumably a reason...) and I've no idea whether it might contain something useful or not.

There are systems, typically older ones, where all addresses are valid, including whichever NULL corresponds to. Real mode x86 is one such system - the bottom of memory there contains the vector table.


As an example: 16 bit x86 puts the interrupt table at linear address 0.

On later x86 you can map that page to whatever you want in kernel mode and it will work. But expect C programs to do weird things for what should be crashing bugs.


The other replies to your post talk about various older or embedded systems, but here's the answer for the typical systems actually affected by that CVE, running 32-bit or 64-bit x86:

If nothing is mapped at 0, the kernel will fault just like userland would. This results in a kernel panic.

However, the kernel and userland share an address space. On a 32-bit x86 system with the default configuration, Linux allocates addresses 0 to 0xc0000000 to userland, and 0xc0000000 to 0xffffffff to the kernel. [1] (Each userland process had its own page table, but the kernel mapped itself into every page table.) This is unavoidable to some extent, because an interrupt or system call switches the system to kernel mode and jumps to a kernel-provided address, but does not automatically swap the page table, so at least the interrupt handler needs to be mapped into every page table. [2] x86-64 is similar, but with the upper half of the address space reserved for the kernel.

So why can't user code mess with the kernel's data? Each entry in the page table has a single privilege level bit. If it's set, both user and kernel code can access the page; if it's clear, only kernel code can access it. [3] At the time, there was no way to make memory accessible from user code but not the kernel, as that was considered unnecessary. Thus, userland couldn't access kernel pointers, but the kernel could directly load/store pointers belonging to the current user process. This was used intentionally when the kernel needed to copy data in or out of the process, but it also meant that if the kernel code accidentally dereferenced a bad pointer, it could end up referring to userland data.

That included the null pointer: accesses to it would succeed if and only if the current user program had previously mapped something at address 0, via mmap() with the MAP_FIXED flag. And that's what the exploit code did.
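
For the curious, here's a rough sketch of the userland half of such an exploit (only the mapping step; a real exploit would also have to lay out fake kernel structures in that page):

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* Ask for the zero page explicitly; without MAP_FIXED the kernel
           never hands out address 0 on its own. */
        void *p = mmap((void *)0, 4096, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");   /* typically EPERM when mmap_min_addr blocks it */
            return 1;
        }
        /* An exploit would now fill page 0 with attacker-controlled data
           that a kernel NULL dereference ends up reading. */
        return 0;
    }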

The page tables are under the kernel's control, so the kernel could make null pointer dereferences unexploitable (resulting in a kernel panic but nothing more) simply by refusing to allow user processes to map memory at address 0 – and in fact Linux already had a setting to do so (mmap_min_addr). But it was a setting rather than simply hardcoded into the kernel, because... well, some real software actually depends on mapping address 0 for silly reasons, mostly pseudo-emulation software like dosemu and wine which directly runs the emulated code in its address space. So not all systems had the setting enabled, and there was also a separate issue where enabling SELinux would cause mmap_min_addr to be ignored. [4]

Years later, Intel added an extension in newer CPUs called SMAP (Supervisor Mode Access Prevention), which is simply a flag that makes the kernel fault if it tries to access pages marked as accessible to userland. In other words, the privilege bit now selects between kernel only and user only. Much saner – after all, even with mmap_min_addr blocking exploitation of null pointers, other garbage pointers could still end up pointing to userland, which made it easier to exploit the kernel (though, compared to the situation with null pointers, it's more often a question of "how easy is it to write an exploit" or "how reliable is the exploit" than of exploitable versus unexploitable). The kernel and userland still share an address space, though, so the kernel can just toggle off the flag when it's intentionally accessing userland data.

(Even later, the Meltdown hardware vulnerability triggered the implementation of kernel page-table isolation, but that's another story.)

[1] https://lwn.net/Articles/75174/

[2] https://stackoverflow.com/questions/32598810/does-cr3-change...

[3] https://webcache.googleusercontent.com/search?q=cache:b5g4ss...

[4] https://blog.namei.org/2009/07/18/a-brief-note-on-the-2630-k...


Thanks for the very detailed explanation!

If I understand correctly, a page mapped by a process at address zero allows both userland and kernel code to trigger unexpected code paths, since page access isn't exclusively kernel or userland. The optimizations mentioned in TFA add even more potential for issues, since userland code could control pointers in that zero page to point to arbitrary data in userland that the kernel can read.

This is fascinating, I didn't know it was possible to share pages between userland and the kernel, and always assumed those two were strictly segregated.


Yep. Something I didn't mention is that if you just try to allocate memory without using MAP_FIXED to force a particular address, the kernel will never choose address 0, regardless of the value of mmap_min_addr. That's true even if the entire rest of the address space is filled. Therefore, userland programs can rely on accesses to address 0 causing a fault unless they specifically ask to map it, which makes the compiler optimization in question perfectly reasonable for most of them. After all, a userland program doesn't worry about being exploited by itself.

(There's still potential for unexpected behavior in those userland programs that do map 0, like wine and dosemu. Even if those programs themselves are compiled with -fno-delete-null-pointer-checks – I'm not sure whether they are – they link to system libraries which aren't. Oh well.)


The assignment merely takes the address of tun and adds a small value. That wouldn’t cause a crash.

The surprise is that the if (!tun) check is optimized away, because if tun is NULL the assignment causes undefined behavior, which the compiler does not have to take into account.


No, it fetches it - tun->sk is copied. The address calculation case would be something more like struct sock **p = &tun->sk.


My take, after 12 years of industry C++ experience, working on code that needs to be fast: too much emphasis is placed on gaining another 0.5% performance improvement, instead of accepting slightly slower code that does what was intended. At least offer some safe defaults and make the bleeding edge optional.

While we are debugging things like this, people are writing servers in JavaScript and web services in Python :O No need to optimize so heavily, they will waste it anyway :)


I've mostly written C for embedded. Which sometimes needs to be fast. I agree with you.

It definitely feels like, as compiler writers continue to gleefully add more footguns to C/C++, application programmers vote with their feet by using slower or much slower interpreted languages like Java, C#, JavaScript, Python, Ruby, and PHP. Meanwhile, systems programmers are eyeing Go and Rust for infrastructure.


That 0.5% improvement can help a lot when it's inside the Javascript engine or the Python interpreter.


It's usually a 0.5% improvement on a micro benchmark.

One of my suspicions is that at the low end where I operate, the marginal cost of higher speed is essentially zero. My firmware spends more than 99.9% of its time sleeping, so micro-optimizations of a few percent are meaningless. At the other end, superscalar processors are a moving target for micro-optimizations. Further, a lot of tasks look like init -> process data -> clean up. Over time the process-data part has gotten very large, making the init and clean-up parts a smaller and smaller percentage of the execution time, so micro-optimizations in those parts of the code provide no value. Next is the constant movement to push the data processing into either specialized CPU instructions or GPUs.


A single optimization pass might only improve a microbenchmark a bit, but all passes taken together significantly speed up most programs. In the embedded software that I have experience with, we eventually had to turn on optimizations because otherwise we would have had to switch to a new hardware platform to run increasingly demanding workloads.


I've spent 3 years building a Java code base with a class of requests having an average response time of < 1 ms, and the entire application had a 99th-percentile response time of < 10 ms, including GC and everything.

Quite honestly, after a few more years of experience: cache smart and batch your queries. Network latency, a.k.a. the speed of light in fiber or copper, is our enemy, not a JVM GC'ing in a controllable way. If the CPU cache is your issue, you can either correct me, or you're abusing the network without realizing it.


Some systems will have stricter latency requirements than that -- microseconds, always, no exceptions (e.g. studio audio, network packet processing, industrial controls). Others will have maximizing throughput as a goal (e.g. x264). In both cases CPU cache could be a bottleneck and GC would be the enemy.


The leaking of sensitive data due to DCE'd memset is an interesting one. Generally, compilers are free to temporarily move data around to a lot of places, such as on the stack for register spilling. Is there any programming language at all which allows sensitive data to be annotated in such a way that the compiler will promise not to leak it to memory indefinitely in some sense? (E.g. all places that the data may be written to are cleaned up before reaching some sort of security boundary)


>Is there any programming language at all which allows sensitive data to be annotated in such a way that the compiler will promise not to leak it to memory indefinitely in some sense?

Somewhat related: there's a recent paper that develops a language with syntax for marking data as secret; the compiler then goes even further and avoids timing side-channel leaks:

Cauligi, Sunjay, et al. "FaCT: A flexible, constant-time programming language." 2017 IEEE Cybersecurity Development (SecDev). IEEE, 2017. http://www.sysnet.ucsd.edu/~bjohanne/assets/papers/2017secde...


Perhaps Rust, using Pin? I'm not entirely clear on the guarantees made there, perhaps an expert could weigh in on if that's a viable option?


A better title would be security flaws caused by relying on undefined behavior.


Given that practically anything under the sun is undefined behavior for C and C++, that isn’t saying much.


It says everything. Even if undefined behavior works one way on all platforms today, it could work differently tomorrow in a way that introduces bugs. It's nonsensical to blame the compiler for conforming to the standard in a way that breaks code using undefined behavior.


What gives you this impression?


The C standard includes an appendix that lists ~200 examples of undefined behavior. This list does not claim to be exhaustive.

Often, what constitutes undefined behavior is non-obvious (and not well justified). For example, when adding two signed integers results in an overflow, it is undefined behavior even if your program never uses the result.

Due to C's definition of "undefined" behavior, all of the guarantees we rely on to ensure security go out the window whenever the programmer steps on one of these land mines.


Not all UB falls into this category. A lot of UB, such as your signed integer addition example, is dependent upon the behavior of the underlying hardware. Certain archs may throw an exception on signed integer overflow, or exhibit otherwise inconsistent behavior, for example. The standard is the standard, of course, but not all implementations inherit the UB of the standard.


Whether or not something is "undefined behavior" has nothing to do with the hardware. The C specification says what is specified, what is "unspecified", what is "implementation defined", and what is "undefined".

If something is "undefined" according to C, you can't rely on what the hardware does, because the hardware might not even get a chance to do anything. The compiler may completely elide sections of your program -- and they do in practice (for example, bounds checks).

Actually, hardware always does something reasonable for add instructions (throw exception or overflow or saturate). It's additions in C that can have unreasonable results.


> A lot of UB, such as your signed integer addition example, is dependent upon the behavior of the underlying hardware.

The problem is that you can no longer depend on that, because compiler writers have decided they can do anything during AST optimization when there is undefined behavior.


That's because...they can. Because the behavior of the statement is undefined.


“Those who can make you believe absurdities, can make you commit atrocities.” -Voltaire

Who knew he was talking about C compilers??


What about I-know-what-I'm-doing const_casts in critical software like operating systems and kernels? Do you think developers should distribute binaries they know work, or let people maybe cause bugs with compiler options?


There is no guarantee it would work if it relies on undefined behavior--that's what undefined means, and also the reason the optimizer acted the way it did. In that case, adding flags that define what happens when a particular undefined behavior is hit would probably be the way to go, or rewriting it in a way that doesn't rely on undefined behavior.


The concept of "undefined" is incoherent. At the same time as people insist the compiler can do anything under the circumstances, everyone accepts that there is some limit to what it is reasonable to expect it to do. It's all just quibbling over where exactly the limits are. But as long as there are limits, the definition of undefined was never valid.

It seems to me that the problem is that trying to define undefined behavior is an inherent contradiction.

Setting aside the question of what exactly "undefined behavior" means, why does a language spec have to include it? If there is behavior that cannot be defined, why not just omit it from the standard?


> Setting aside the question of what exactly "undefined behavior" means, why does a language spec have to include it? If there is behavior that cannot be defined, why not just omit it from the standard?

The original reason was that there were things they didn't want to define. For example, signed integer overflow works differently on different hardware architectures. If they defined one behavior in the standard then compilers for architectures that didn't do it that way would have to do something inefficient to make it work the way the standard says it should rather than the way that hardware actually does it.

Calling it "undefined behavior" lets the compiler do whatever the hardware does even if that means the program produces different results on different architectures. It also means that if some new architecture comes out that does it slightly differently, nobody can be surprised when compilers use the native overflow behavior for that architecture.

The flaw was in giving compilers too much discretion. They were generally expected to implement one of the sane versions of signed integer overflow, and specifically the one corresponding to the relevant hardware architecture, but according to the spec they can literally do whatever they want. So we get this:

https://kristerw.blogspot.com/2016/02/how-undefined-signed-o...

  x + c < x       ->   false
Which means you can't use that to check whether signed overflow occurred even when you know the underlying hardware behavior, because if it did occur you've already invoked UB and the compiler is allowed to do anything, including omit your check, which it does.

What would help a bit is if, when compilers are going to do something like this, they emitted a warning, something like "comparison is always false because signed integer overflow is undefined."

What would help even more is for the next version of the standard to convert a lot of this undefined behavior into implementation-defined behavior or similar, which still allows for hardware-specific implementations but requires them to be documented and prevents a lot of this unintuitive ex post facto "optimization" that causes more trouble than it's worth.
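
For reference, a sketch of a UB-free way to write that kind of check today is to test the bounds before doing the addition rather than testing the already-overflowed result:

    #include <limits.h>

    /* Returns nonzero if x + c would overflow an int; the addition is never
       performed, so there is no UB. */
    int add_would_overflow(int x, int c)
    {
        if (c > 0)
            return x > INT_MAX - c;
        else
            return x < INT_MIN - c;
    }

(GCC and Clang also provide __builtin_add_overflow for this.)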


"Calling it "undefined behavior" lets the compiler do whatever the hardware does even if that means the program produces different results on different architectures"

Isn't this "implementation dependent", rather than "undefined"?


This is too narrow a view on things imo.

> The original reason was that there were things they didn't want to define.

For signed integer overflow, maybe. I don't claim to know how this evolved in every last detail, but this is definitely not what UB is currently for - there's specifically "implementation-defined behavior" (actual behavior must be documented by the implementation) and "unspecified behavior" (can be non-deterministic, possibly limited) for what you are describing.

http://eel.is/c++draft/intro.abstract

Undefined behavior is what allows many optimizations to be made in the first place, and it is also necessary so that compilers don't have to solve the halting problem.

> What would help a bit is if compilers are going to do something like this, they emitted a warning something like "comparison is always false because signed integer overflow is undefined."

Yes, in that specific case that would be a useful warning. Linters can do that for you. But compilers make use of this assumption all the time, for example when optimizing for loops. Would you like a warning every time the compiler made your loops faster by relying on this UB? Every time a pointer is dereferenced?

> What would help even more is for the next version of the standard to convert a lot of this undefined behavior into implementation-defined behavior or similar, which still allows for hardware-specific implementations but requires them to be documented and prevents a lot of this unintuitive ex post facto "optimization" that causes more trouble than it's worth.

For a lot of UB that is not even an option. How do you find the correct initialization order for dynamic initialization? You can't, you'd have to solve the halting problem. It's the programmer's job to get this right, not the compiler's. What should messing this up result in, if not UB?

And you may not like it, but p0907 (which requires signed integers to use two's complement) suggested to make signed integer overflow defined and had that suggestion strongly declined. You put "optimization" in quotes but that's exactly what this is about - in practice it would make tons of code (in particular loops) significantly slower to eliminate this UB. You're free to doubt WG21 but I won't.

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p090...

Note that there are compiler switches in most compilers to make signed overflow defined if this is your main gripe with UB.


> Would you like a warning every time the compiler made your loops faster by relying on this UB?

Yes, because then I know to convert the loop counter to unsigned, which it ought to be anyway so that there isn't problematic behavior if the signed value actually did overflow when using a compiler or compiler flags that don't perform that optimization.

> Every time a pointer is dereferenced?

Every time a pointer is dereferenced and the compiler uses that fact to cause some other statement to have no effect? I want to see that warning, yes.

> You put "optimization" in quotes but that's exactly what this is about - in practice it would make tons of code (in particular loops) significantly slower to eliminate this UB.

That's an argument for why it shouldn't be two's complement, not for why it has to be fully undefined behavior. If you're going to make signed integers never overflow when used as a loop counter, what's wrong with documenting that and offering a warning in -Wall or -Wextra when it happens?

And it's nothing specifically to do with signed integer overflow. If you're removing code the programmer wrote or making conditional statements unconditional because it can only happen in the presence of UB, that's a huge red flag that there is a bug in that program and the compiler should not be silent about it.


> then I know to convert the loop counter to unsigned, which it ought to be anyway

See below.

> Every time a pointer is dereferenced and the compiler uses that fact to cause some other statement to have no effect?

No, every time any logic in the compiler does any UB-based inference. And that's essentially always. For example, you can't reorder variables unless you assume the abstract machine semantics and memory model. You can't elide repeated loads either. And eliding reloads is a very simple case of some expression having no effect.

More generally, for every compiler optimization I can tell you a UB source that breaks that optimization (i.e. a UB-backed guarantee of the standard that the compiler has to rely on). So should we not do any compiler optimizations at all? That's freely available to you in every compiler.

> That's an argument for why it shouldn't be two's complement, not for why it has to be fully undefined behavior.

Sorry, I don't understand your point. Why should integers not be two's complement? How would that help?

> If you're going to make signed integers never overflow when used as a loop counter, what's wrong with documenting that and offering a warning in -Wall or -Wextra when it happens?

What do you mean "when it happens"? The compiler can't in general determine at compile-time whether a loop counter will overflow. Are you suggesting all loops with signed counter should produce a warning because everyone should be using unsigned loops? If you want slow-but-safe-by-default language then you are simply at the wrong address with C++.

> If you're removing code the programmer wrote...

That's essentially the compiler's whole point. Cut through all the abstraction and generate efficient machine code. Yes, I want that three-iteration loop unrolled, I don't actually intend to perform 3 increments and 4 comparisons (and special overflow handling) in machine code. Yes, I want all those container access functions (full of unspoken range assumptions) or recursive variadic templates (full of unconditionally-false-once-expanded ifs) inlined and not appearing at all in the assembly. Try running a no-optimizations build of any larger piece of C++ software and see what I mean.

> ...or making conditional statements unconditional because it can only happen in the presence of UB that's a huge red flag that there is a bug in that program and the compiler should not be silent about it.

You're thinking of trivial situations where the compiler could reasonably guess that relying on UB causes an unwanted optimization. But you can't build a compiler around only the nice and happy cases - what if that situation occurs 4 levels deep in some template code where 3 other functions were already inlined and the compiler can see that in that specific case some condition cannot be true without UB. Would you like a warning about every such case?

It's just not the compiler's job to second-guess your code. There are tools (specifically linters) that are built to detect these easy cases you're thinking of and help you find these bugs (but they won't help you with bugs in the hard cases either).


> But as long as there are limits, the definition of undefined was never valid.

There are always limits. Your CPU is (for the most part) deterministic, and no amount of UB will change that (well, the nuclear missiles launched due to UB might...).

> It seems to me that the problem is that trying to define undefined behavior is an inherent contradiction.

Here is the definition of UB according to the C++ standard:

    "This document imposes no requirements on the behavior of programs that contain undefined behavior."  
http://eel.is/c++draft/intro.abstract

Don't try to define or reason about the consequences of UB, that's pointless. Just don't provoke any undefined behavior and you get to live in the clearly defined world of the standard.

> Setting aside the question of what exactly "undefined behavior" means, why does a language spec have to include it? If there is behavior that cannot be defined, why not just omit it from the standard?

"X is UB" means "compiler writers may freely assume that X is not done". If you omit that then compilers would have to verify that X is not done, and there are requirements in the standard which would require the halting problem to be solved in order to verify them in user code. The standard likes to avoid forcing compilers to solve the halting problem.


"This document imposes no requirements on the behavior of programs that contain undefined behavior."

The NY Vehicle and Traffic law imposes no requirements on the behavior of drivers who engage in cannibalism. However, it would be odd to interpret this as meaning that if you commit cannibalism, you are exempt from all rules regarding motor vehicles.

There are clearly two kinds of "undefined" behavior - the kind that is defined as undefined, and the kind that is not. To understand either, you have to understand both.


> The NY Vehicle and Traffic law imposes no requirements on the behavior of drivers who engage in cannibalism.

Are you trying to argue that the standard quote is unclear? That you think it can be read "imposes no additional/special requirements" (because that's the interpretation that your traffic law argument assumes)? Because if you ignore the nonsensical meaning, I would read your traffic law sentence as "imposes no requirements whatsoever".

Regardless of what your stance is regarding possible ambiguity in the way that sentence is worded, both the intent and the practical consequences of that statement are abundantly clear: If your program has UB (per what the C++ standard considers UB), then the C++ standard makes absolutely no guarantees what will happen when you run it.

> There are clearly two kinds of "undefined" behavior - the kind that is defined as undefined, and the kind that is not. To understand either, you have to understand both.

I don't understand what you are trying to say. There is only one kind of undefined behavior. If you follow the rules of the C++ standard you get to live in a nice and predictable world. If you don't, anything can happen and you're on your own.


There's more than one kind of undefined behavior, and probably more than one way to categorize it.

The distinction I was making is between "what the...standard considers UB" and what the standard doesn't consider period. For instance, the standard doesn't (I assume) declare anything about the effect of cosmic rays on C++ programs. However, that does not mean that C++ compilers are designed or should be designed not to work unless run on equipment that is completely shielded.

There is a semantic difference that seems important to me, but which continually slips away in these discussions. And it's palpably related, for me, to the issues people have with compiler behavior. It's not totally the standard at issue, I don't think, but the culture that provides its context.

It would be nice if following a language standard meant that you get to live in a nice and predictable world, but isn't this an absurd statement?


What about I-don't-know-what-I'm-doing-const-casts-in-my-kernel: https://lore.kernel.org/lkml/CAKwvOd==SCBrj=cZ7Ax5F87+-bPMS9...?


Basically, every kernel needs an explicit_bzero() system call because it's very difficult to assure data flow properties of de/initialization without something the compiler cannot optimize away.
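
For reference, a minimal sketch of the usual compiler-barrier trick (GCC/Clang inline asm, not portable; glibc's explicit_bzero and the kernel's memzero_explicit work roughly along these lines):

    #include <string.h>

    /* Zero a buffer and keep the stores from being dead-store-eliminated:
       the empty asm claims to read the memory behind p, so the compiler
       has to have actually written the zeros before it. */
    static void secure_zero(void *p, size_t n)
    {
        memset(p, 0, n);
        __asm__ __volatile__("" : : "r"(p) : "memory");
    }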


memset_s() was added to C11 for this.
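
A usage sketch, assuming an implementation that actually ships the optional Annex K (see the sibling comments on availability):

    #define __STDC_WANT_LIB_EXT1__ 1
    #include <string.h>

    void wipe_key(char *key, size_t len)
    {
        /* Annex K forbids the implementation from optimizing this away,
           unlike a plain memset of a buffer that is never read again. */
        memset_s(key, len, 0, len);
    }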


memset_s was added to C11 in an optional annex, and my understanding is that there are zero platforms that actually implement it. (Microsoft implemented an early draft of Annex K that doesn't actually include memset_s.)


Most libcs added an insecure version of memset_s, doing only the compiler barrier discussed above, but not a memory barrier, which is needed for Spectre and other broken hardware. The default memset should do the compiler barrier. But unfortunately you cannot talk with libc maintainers about security. Too much arrogance. Thanks to this Red Hat article for supporting the user base on this.

You can use my safeclib, which implements the Annex K extensions.


It's present on Mac OS X.



