Question :
I’m trying to put together a function to convert a size from one type to another (e.g. bytes >> gigabytes).
An extract is below
DROP FUNCTION fc_convSize;
DELIMITER $$
CREATE FUNCTION `fc_convSize`(inVal BIGINT)
RETURNS DECIMAL(25,2)
NO SQL
BEGIN
-- bytes 2 exabytes
SET @outVal := inVal/1024/1024/1024/1024/1024/1024;
RETURN @outVal;
END$$
DELIMITER ;
But it doesn’t work with large numbers. e.g.
mysql> SET @input1 := (SELECT @@global.max_binlog_cache_size);
Query OK, 0 rows affected (0.00 sec)
mysql> SELECT fc_convSize(@input1);
ERROR 1264 (22003): Out of range value for column 'inVal' at row 1
But from what I can tell, max_binlog_cache_size is 18446744073709547520 and the maximum for a BIGINT is 18446744073709551615, so it should fit, right?
But it gets stranger.
mysql> SET @input1 := (SELECT @@global.max_binlog_cache_size);
Query OK, 0 rows affected (0.00 sec)
mysql> SELECT @input1/1024/1024/1024/1024/1024/1024;
+---------------------------------------+
| @input1/1024/1024/1024/1024/1024/1024 |
+---------------------------------------+
| -0.000000000000003552713679 |
+---------------------------------------+
1 row in set (0.00 sec)
mysql> SET @input2 := (18446744073709547520);
Query OK, 0 rows affected (0.00 sec)
mysql> SELECT @input2/1024/1024/1024/1024/1024/1024;
+---------------------------------------+
| @input2/1024/1024/1024/1024/1024/1024 |
+---------------------------------------+
| -0.000000000000003552713679 |
+---------------------------------------+
1 row in set (0.00 sec)
mysql> SELECT 18446744073709547520/1024/1024/1024/1024/1024/1024;
+----------------------------------------------------+
| 18446744073709547520/1024/1024/1024/1024/1024/1024 |
+----------------------------------------------------+
| 15.999999999999996447286321 |
+----------------------------------------------------+
1 row in set (0.00 sec)
What’s going on???
Answer :
But from what I can tell, max_binlog_cache_size is 18446744073709547520 and the maximum for a BIGINT is 18446744073709551615, so it should fit, right?
The maximum value of BIGINT SIGNED is 9223372036854775807.
The maximum value of BIGINT UNSIGNED is 18446744073709551615.
SIGNED is the default, so a parameter declared as plain BIGINT cannot hold 18446744073709547520. Hence the ERROR 1264 (22003): Out of range value error.
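If the parameter is to stay an integer, one option (a minimal sketch, not shown in the original post) is to declare it BIGINT UNSIGNED, which raises the upper bound to 18446744073709551615:
DROP FUNCTION IF EXISTS fc_convSize;
DELIMITER $$
CREATE FUNCTION fc_convSize(inVal BIGINT UNSIGNED)
RETURNS DECIMAL(25,2)
NO SQL
BEGIN
  -- bytes to exabytes; UNSIGNED accepts the full 0..18446744073709551615 range
  RETURN inVal/1024/1024/1024/1024/1024/1024;
END$$
DELIMITER ;
SELECT fc_convSize(18446744073709547520);   -- 16.00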
In the subsequent examples, you are expecting the @input1 variable to act like a BIGINT. But in MySQL, the datatype of a user-defined variable depends on the value assigned to it at runtime. For example:
rds@int[(none)]> set @i := 9223372036854775807;
Query OK, 0 rows affected (0.00 sec)
rds@int[(none)]> select @i + 0.0;
+-----------------------+
| @i + 0.0 |
+-----------------------+
| 9223372036854775807.0 |
+-----------------------+
1 row in set (0.00 sec)
In the above example, the variable @i is acting like a BIGINT.
But if we increase the value of @i by 1 (beyond the range of BIGINT SIGNED), the behavior changes.
rds@int[(none)]> set @i := 9223372036854775808;
Query OK, 0 rows affected (0.01 sec)
rds@int[(none)]> select @i + 0.0;
+------------------------+
| @i + 0.0 |
+------------------------+
| -9223372036854775808.0 |
+------------------------+
1 row in set (0.00 sec)
I think that in this case the user variable is acting like a FLOAT.
As suggested by Rick James, CAST the variable to the desired datatype to avoid such an unpredictable situation.
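For example (a small sketch in the spirit of that suggestion; the exact result depends on your max_binlog_cache_size setting):
SET @i := @@global.max_binlog_cache_size;
-- Force the desired datatype explicitly instead of relying on how @i was stored
SELECT CAST(@i AS UNSIGNED)/1024/1024/1024/1024/1024/1024 AS exabytes;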
Do you have to declare the argument as a bigint?
DELIMITER $$
CREATE OR REPLACE FUNCTION fc_convSize(inVal DECIMAL(40,0))
RETURNS DECIMAL(25,2)
NO SQL
BEGIN
  RETURN inVal/1024/1024/1024/1024/1024/1024;
END$$
DELIMITER ;
select fc_convSize(18446744073709547520000000123333232323);
+-----------------------------------------------------+
| fc_convSize(18446744073709547520000000123333232323) |
+-----------------------------------------------------+
| 15999999999999996447.29 |
+-----------------------------------------------------+
1 row in set, 1 warning (0.00 sec)
However:
select fc_convSize(10000*18446744073709547520);
ERROR 1690 (22003): BIGINT UNSIGNED value is out of range in '(10000 * 18446744073709547520)'
MySQL tries to fit the result of the multiplication into a BIGINT, which fails. You can cast one of the arguments to avoid this:
select fc_convSize(10000*cast(18446744073709547520 as decimal(40,0)));
+----------------------------------------------------------------+
| fc_convSize(10000*cast(18446744073709547520 as decimal(40,0))) |
+----------------------------------------------------------------+
| 160000.00 |
+----------------------------------------------------------------+
1 row in set, 1 warning (0.00 sec)
Note that I just grabbed DECIMAL(40,0) out of thin air; pick whatever precision is large enough for your needs.
Bytes to gigabytes needs only 3 divides:
/1024/1024/1024
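If you want one routine that handles several target units, a hypothetical generalization (fc_convSize2 and its unitPower parameter are my own naming, not from the answers above) could take the power of 1024 as an argument:
DELIMITER $$
CREATE FUNCTION fc_convSize2(inVal DECIMAL(40,0), unitPower TINYINT)
RETURNS DECIMAL(25,2)
NO SQL
BEGIN
  -- unitPower: 1 = KB, 2 = MB, 3 = GB, ... 6 = EB
  RETURN inVal / POW(1024, unitPower);
END$$
DELIMITER ;
SELECT fc_convSize2(CAST(@@global.max_binlog_cache_size AS UNSIGNED), 3) AS gigabytes;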
A simpler (and faster) way to do this
SET @input1 := (SELECT @@global.max_binlog_cache_size);
is
SELECT @input1 := @@global.max_binlog_cache_size;
The problem with SET @input2 := (18446744073709547520); is that the number is bigger than can fit in a BIGINT SIGNED. So it (somewhat erroneously) stores the 64 bits blindly; this turns into a negative number.
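For reference, you can reproduce the wrapped value directly: 2^64 is 18446744073709551616, so the same 64 bits read back as a signed value give 18446744073709547520 - 18446744073709551616 = -4096, and dividing -4096 by 1024 six times matches the output seen earlier:
SELECT 18446744073709547520 - 18446744073709551616;   -- -4096
SELECT -4096/1024/1024/1024/1024/1024/1024;            -- -0.000000000000003552713679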
Consider CAST:
SELECT CAST(@@global.max_binlog_cache_size AS UNSIGNED)/1024/1024/1024;
--> 17179869183.999996185303