# JavaScript Bit Array

Good code has to satisfy many quality factors: readability, reusability, but also performance. Online users in particular aren't known for their patience, so writing code with very low-level algorithms can speed things up a lot. To abstract away some of this bit hacking, I'm publishing a package called **BitSet.js** today.

The library makes it easy to work with bit vectors of any size. Integers, on the other hand, are limited to 32 or 64 bits; in JavaScript, bitwise operators work on signed 32-bit integers, which leaves just 31 usable bits for non-negative values. So check it out!

In fact, it's really the same as working with integers directly, but with the benefit of not worrying about the size of the bit-range.
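To see why plain integers stop scaling, here is a small illustration of what happens past bit 31 (runnable in Node.js or a browser console):

```javascript
// With plain numbers, bit positions stop behaving past bit 31, because
// JavaScript's bitwise operators work on signed 32-bit integers.
var a = 0;
a |= 1 << 31;
console.log(a); // -2147483648: bit 31 is the sign bit

a = 0;
a |= 1 << 32;   // the shift count wraps around (32 % 32 === 0) ...
console.log(a); // ... so this prints 1, not 2^32
```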

Take the following example:

```javascript
var a = 0;
a |= 1 << n;  // Set bit at position n
a = ~a;       // Invert bits
a = a & MASK; // Mask bits
```

It is quite handy, but not when *n* becomes larger than the number of bits in the integer. For large *n*, the following is equivalent to the above, but much cleaner and also scalable:

```javascript
var a = new BitSet(100); // Reserve space for at least 100 bits
a.set(n, 1);             // Set bit at position n
a.not();                 // Invert bits
a.and(MASK);             // Mask bits
```
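For the curious, the core idea behind such a structure can be sketched with an array of 32-bit words: bit *n* lives in word `n >>> 5`, at offset `n & 31`. Note that this is only an illustrative toy (the name `TinyBitSet` and its methods are my own invention here), not the actual BitSet.js implementation:

```javascript
// Toy bit set backed by 32-bit words. Not the real BitSet.js,
// just the underlying idea of mapping bit n to word (n >>> 5).
function TinyBitSet(bits) {
  this.words = new Uint32Array(Math.ceil(bits / 32));
}
TinyBitSet.prototype.set = function (n, value) {
  if (value) this.words[n >>> 5] |= 1 << (n & 31);
  else this.words[n >>> 5] &= ~(1 << (n & 31));
};
TinyBitSet.prototype.get = function (n) {
  return (this.words[n >>> 5] >>> (n & 31)) & 1;
};
TinyBitSet.prototype.not = function () { // invert every bit in place
  for (var i = 0; i < this.words.length; i++) {
    this.words[i] = ~this.words[i] >>> 0;
  }
};

var b = new TinyBitSet(100);
b.set(70, 1); // works fine even though 70 > 31
console.log(b.get(70)); // 1
b.not();
console.log(b.get(70)); // 0
```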

You might also be interested in the following:

- Store small numbers in big numbers
- Combine CSS and JavaScript with lighttpd
- Precise calculations in JavaScript
- Flag based COUNT using MySQL


## 3 Comments on „JavaScript Bit Array”

Hey Robert,

I wrote something kind of similar (based on the limitation of accurate bits in JavaScript), but it's a full BigInt implementation (mostly ported from Python):

https://github.com/blixt/js-bigint

By the way, I believe the full number of bits that can be stored in a JavaScript double without losing data is 53, although you can only perform bitwise operations on the 32 least significant bits, since the runtime converts numbers to signed 32-bit ints when performing bit operations. To interpret the result as a full unsigned 32-bit value, you can use the unsigned right shift (`>>>`):

```javascript
(0xFFFFFFFF & 0xFFFFFFFE).toString(16)         // outputs "-2" (not what you want)
((0xFFFFFFFF & 0xFFFFFFFE) >>> 0).toString(16) // outputs "fffffffe"
```

JH,

a number is represented by the 64-bit floating point data type you mention. When you apply a bitwise operation to a number, the engine implicitly casts it to a signed int32. Example:

```javascript
Math.pow(2, 10) == 1 << 10
```

while

```javascript
Math.pow(2, 32) != 1 << 32
```

That's also why it can be dangerous to replace every Math.floor(x) with x|0 or ~~x. It's correct on the interval [0, 2^31), but not when you try to cover the full double range, or negative numbers, where truncation and flooring differ.
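A small demonstration of where that replacement breaks down, both for negative values and for values outside the int32 range:

```javascript
// x|0 truncates toward zero within the int32 range; Math.floor rounds down.
console.log(Math.floor(3.7), 3.7 | 0);   // 3 3 (they agree for non-negative int32 values)
console.log(Math.floor(-3.7), -3.7 | 0); // -4 -3 (truncation vs flooring)

var big = Math.pow(2, 31) + 0.5;         // just past the int32 range
console.log(Math.floor(big), big | 0);   // 2147483648 -2147483648 (int32 wrap-around)
```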

Robert

You do know that in JavaScript "All numbers in Javascript are 64bit (8 bytes) floating point numbers" (http://www.hunlock.com/blogs/The_Complete_Javascript_Number_Reference)

and "Number is 64-bit floating point, similar to Java's double and Double. There is no integer type" (http://www.crockford.com/javascript/survey.html), right?

So how does your BitSet library create 32-bit integers in JavaScript?