The charCodeAt() method returns the Unicode value (UTF-16 code unit) of the character at the specified index in the string. The index of the first character is 0, the second character 1, and so on.
The syntax of the String charCodeAt() function is the following.

string.charCodeAt(index)

The index parameter is a number representing the index of the character you want to return. It is optional; if you omit it, it defaults to 0.
The charCodeAt() method returns NaN if there is no character at the specified index, that is, if the index is less than 0 or greater than or equal to the string's length.
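A quick sketch of these edge cases (the string used here is just an illustrative example):

```javascript
// app.js
let str = "HELLO";

console.log(str.charCodeAt(99)); // NaN: no character at index 99
console.log(str.charCodeAt(-1)); // NaN: the index is less than 0
console.log(str.charCodeAt());   // 72: with no argument, the index defaults to 0 ("H")
```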
// app.js
let str = "HELLO KRUNAL";
let op = str.charCodeAt(str.length - 1);
console.log(op);
The output is 76, the UTF-16 code of the last character, "L".
The String charCodeAt() method returns an integer between 0 and 65535 representing the UTF-16 code unit at the given index.
The UTF-16 code unit matches a Unicode code point for code points that can be represented in a single UTF-16 code unit. If a Unicode code point cannot be represented in a single UTF-16 code unit (because its value is greater than 0xFFFF), then the code unit returned is the first part of a surrogate pair for the code point.
Note that String charCodeAt() always returns a value less than 65536, because higher code points are represented by a pair of (lower-valued) "surrogate" code units that together make up a single real character.
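A short sketch of this surrogate-pair behavior, using the emoji U+1F600 as an example character outside the Basic Multilingual Plane:

```javascript
// app.js
let emoji = "😀"; // U+1F600, greater than 0xFFFF

console.log(emoji.length);         // 2: the character takes two UTF-16 code units
console.log(emoji.charCodeAt(0));  // 55357 (0xD83D), the high surrogate
console.log(emoji.charCodeAt(1));  // 56832 (0xDE00), the low surrogate
console.log(emoji.codePointAt(0)); // 128512 (0x1F600), the full code point
```

If you need the full code point rather than a single code unit, use codePointAt() instead.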
See the following code.
// app.js
let data = "KRUNAL".charCodeAt("KRUNAL".length);
console.log(data);
In the above code, the output is NaN because "KRUNAL".length is 6, and the valid indexes run from 0 to 5.
See the below code.
// app.js
let op = "KRUNAL".charCodeAt(0);
console.log(op);
The output is 75, the UTF-16 code of "K".
We have already seen the String charAt() function on this blog.
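As a quick comparison, charAt() returns the character itself while charCodeAt() returns its numeric code:

```javascript
// app.js
let name = "KRUNAL";

console.log(name.charAt(0));     // "K": the character at index 0
console.log(name.charCodeAt(0)); // 75: its UTF-16 code unit
```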