I am currently setting up a website that uses a MySQL database with more than 40 million rows of data. I use one server for files and a separate MySQL server. Both servers have 20 cores and 64 GB of RAM.
Watching my longest SQL queries, I can see that the MySQL server uses at most two cores, so my longest query takes more than 38 seconds to execute. See the slow-log result below.
User@Host: dev_data @ localhost 
Query_time: 38.113460 Lock_time: 0.000514
Rows_sent: 10 Rows_examined: 48683733
How can I configure my MySQL server so that it uses all 20 cores to handle the query? MySQL version is 5.6.
Atop result below.
tl;dr You can’t force MySQL to use a lot of cores.
40 million (4 crore) rows is a large, but not huge, database for MySQL. It is well within the capability of that software. You don’t have to resort to desperate measures to get MySQL to work with that amount of data. You do have to index it correctly though.
MySQL can use a lot of CPU cores as of recent releases. But it doesn’t use them to accelerate single queries; that sort of workload is not embarrassingly parallel. You can’t get it to use a large number of cores for a single query. It simply doesn’t work that way.
When it uses many cores, it uses them to handle simultaneous workload from many different connections. But if all those connections hit the same tables, it won’t be able to use a lot of cores even if they are available.
At any rate, you should upgrade to version 5.7. It uses multithreading for more things than 5.6.
You will be able to improve performance with appropriate indexing for your query workloads. We can’t advise you about that because you haven’t offered any specifics about existing tables, indexes, queries, or simultaneous access by multiple clients.
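To make the indexing point concrete, here is a hypothetical sketch; the table and column names (`readings`, `sensor_id`, `ts`) are assumptions, since you haven’t shown your schema. A compound index matching the WHERE clause and the ORDER BY is what typically turns “Rows_examined: 48683733” into a few thousand:

```sql
-- Hypothetical: suppose the slow query looks something like
--   SELECT ... FROM readings
--    WHERE sensor_id = 7 AND ts >= '2016-01-01'
--    ORDER BY ts DESC LIMIT 10;
-- A compound index covering the equality filter, then the range/sort column:
ALTER TABLE readings ADD INDEX idx_sensor_ts (sensor_id, ts);

-- Then check the plan; the rows estimate should drop from tens of millions
-- to roughly the number of rows matching the WHERE clause:
EXPLAIN SELECT * FROM readings
 WHERE sensor_id = 7 AND ts >= '2016-01-01'
 ORDER BY ts DESC LIMIT 10;
```

The right index depends entirely on your actual queries, which is why the details matter here.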
Your server, with 20 cores and 64 GB of RAM, is overprovisioned in cores and RAM for MySQL, and for the sort of workload you mention. You didn’t mention your server’s I/O capabilities. I guess that replacing your mechanical disk drives with SSD drives using PCIe interfaces will make the most difference. But that’s a guess based on little information.
My blunt advice: sell that massive server, or put it into service as a hypervisor host to use for lots of virtual machines for other things. It’s overkill for MySQL.
Get a server with 16 GB of RAM and four fast cores. Use the extra money to provision it with as many PCIe SSD drives as you need for your application. Don’t consider purchasing SSD drives smaller than about 200 GB; they are slower than the larger ones. Index your tables properly for your queries. For tables of the size you mention, don’t worry about core utilization; instead worry about I/O.
You can probably try all this out for a few tens of dollars on a cloud virtual machine (Amazon, Rack Space, Digital Ocean) so you don’t waste money on another overprovisioned server.
In addition to what Ollie says…
“Rows_sent: 10 Rows_examined: 48683733” — Is that a table scan? A giant GROUP BY? Those are things to work on avoiding in a production server.
For Data Warehousing applications, build and maintain “Summary table(s)” that have subtotals by day (or week) so that the query might hit 400K rows instead of 46M rows. That, alone, will make the query run 10x faster.
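A minimal sketch of such a summary table, with assumed names (`readings`, `daily_summary`, `sensor_id`, `amount`), to show the shape of the idea:

```sql
-- One row per day per group key, instead of one row per raw event:
CREATE TABLE daily_summary (
  dy        DATE          NOT NULL,
  sensor_id INT           NOT NULL,
  cnt       BIGINT        NOT NULL,
  total     DECIMAL(20,2) NOT NULL,
  PRIMARY KEY (dy, sensor_id)
);

-- Refresh incrementally, e.g. once per day for yesterday's rows:
INSERT INTO daily_summary (dy, sensor_id, cnt, total)
SELECT DATE(ts), sensor_id, COUNT(*), SUM(amount)
  FROM readings
 WHERE ts >= CURDATE() - INTERVAL 1 DAY
   AND ts <  CURDATE()
 GROUP BY DATE(ts), sensor_id;
```

Reports then aggregate the small summary table rather than re-scanning the multi-million-row base table on every query.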
If you would like to discuss this further, open a new question with details about that query.
Yet another suggestion: If you have lots of reads, especially long ones, consider a Master-Slave topology. With that, you can scale out reads ‘infinitely’. This scale-out can be independent of the scale-out of web servers as Ollie mentioned. This allows for lots and lots of “simultaneous” web users. (That is how the ‘heavy hitters’ do it.)
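For reference, attaching a read replica in MySQL 5.6 is a few statements on the slave; the host and credentials below are placeholders, and GTID mode is assumed to be enabled on both servers:

```sql
-- Run on each replica. Writes still go to the master; reads can be
-- load-balanced across however many replicas you add.
CHANGE MASTER TO
  MASTER_HOST = 'master.example.com',   -- placeholder host
  MASTER_USER = 'repl',                 -- placeholder replication account
  MASTER_PASSWORD = '...',
  MASTER_AUTO_POSITION = 1;             -- requires gtid_mode=ON on both servers
START SLAVE;

-- Verify replication is running and not lagging:
SHOW SLAVE STATUS\G
```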