Normally, some of us just host the database and web server on a single machine. Alternatively, we might dedicate a high-RAM machine to the database, accessed remotely over the LAN, and a separate machine to the web server.
How does this affect I/O performance under high load on a busy server, and what else should be considered regarding database read/write rates? Is it efficient, and does it use resources well, to run two separate machines? Or is there a better way to get high database performance out of the available resources? And what about security?
edit: 2 servers, each with a 4-thread processor (1 web server & 1 database), vs. 1 server with a 4-thread processor. Does it matter (again, in terms of performance)? Also, how does this apply to cloud hosting performance? Cloud providers also share the database among all their cluster nodes to distribute resources and provide high availability, or am I wrong?
If I understand correctly you are asking:
“Which would be better and faster: running everything on one machine and having clients connect directly to that server, removing networking as a potential bottleneck, or moving the web services to another machine and having clients log in there?”
In I.T. we usually try to squeeze out all the performance we can for every dollar, so I can see why you would ask this, but it is a dangerous path to entertain.
Firstly, what you’re really talking about is resource utilization and where the bottleneck lies. If your bottleneck is in fact network I/O, then consolidating would help performance, but you take on A LOT of risks, which are outlined below. And if network I/O REALLY is the bottleneck, you should fix that instead: use SET NOCOUNT ON in your code and verify that the network is stable. Network I/O is rarely the bottleneck compared to disk I/O or RAM.
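As an illustration of the SET NOCOUNT ON point: it suppresses the "N rows affected" message SQL Server sends back after every statement, which cuts chatter on the wire for procedures that run many statements. A minimal sketch (the procedure and table names are hypothetical):

```sql
-- Hypothetical procedure; SET NOCOUNT ON stops SQL Server from sending
-- a "rows affected" message per statement, reducing network round trips.
CREATE PROCEDURE dbo.usp_GetOrders
AS
BEGIN
    SET NOCOUNT ON;
    SELECT OrderID, OrderDate
    FROM dbo.Orders;   -- hypothetical table
END;
```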
What DB Engine are you using? If you’re using MS SQL Server you could do a simple
SELECT * FROM sys.dm_os_wait_stats ORDER BY wait_time_ms DESC;
How high does network I/O (the ASYNC_NETWORK_IO wait type) show up on the list? Depending on your version, you should get roughly 450-500 rows back. If network I/O isn’t even in the top 50, then you probably shouldn’t entertain this idea.
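A slightly more targeted sketch: rank all wait types by total wait time and see where ASYNC_NETWORK_IO (time spent waiting for the client to consume result sets) actually lands:

```sql
-- Rank every wait type by cumulative wait time, then pull out
-- the network wait to see where it falls in the ranking.
;WITH ranked AS (
    SELECT wait_type,
           wait_time_ms,
           ROW_NUMBER() OVER (ORDER BY wait_time_ms DESC) AS rn
    FROM sys.dm_os_wait_stats
)
SELECT wait_type, wait_time_ms, rn
FROM ranked
WHERE wait_type = 'ASYNC_NETWORK_IO';  -- rn > 50 suggests network is not your bottleneck
```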
Having Everyone Directly Connect On 1 Machine And Run DB/Web Services Locally:
In a world where I.T. best practices trump all else, you would without a doubt separate client access from the main machine. There are just way too many things that could go wrong. For example, if you’re using SQL Server and SSMS, SSMS itself will EASILY eat up 500 MB per instance. Every client who connects and opens SSMS consumes that RAM, so with 20 people connecting you just ate up 10 GB of RAM.
Also, while devs might be great at development, unless you have a strong DevOps environment they cannot be trusted in prod. They are under the gun to get things running fast and will just hack around. When you remove devs from prod, all kinds of mystery issues immediately disappear.
Security would be another risk. In many I.T. environments I’ve consulted at, security was an afterthought. I wouldn’t be surprised if devs and users had access to install files, create shares, change settings, etc. on your prod server. This is horrible.
And what if they start saving large files on your drives and you run out of disk space?
Having everyone connect from a web server:
In this scenario, users are still connecting to the web server directly, which means they could stop web services; however, web services can typically be recovered quickly, and an outage there won’t affect the overall health of the DB server. You isolate the DB from everyone, and this is a much safer alternative.
If you host your web and database server on one machine, your best outcome depends on the hardware configuration. Along with high amounts of RAM, each of the following items could be assigned its own hard drive:
- Operating System
- Web Server
- Database – Log Files
- Database – Rollback Segments
- Database – Temporary Storage
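In SQL Server terms, the data/log/tempdb separation above can be sketched like this (the database name, drive letters, and file paths are hypothetical; adjust them to your own layout):

```sql
-- Hypothetical layout: data files on D:, transaction log on E:.
CREATE DATABASE SalesDB
ON PRIMARY
    (NAME = SalesDB_Data, FILENAME = 'D:\SQLData\SalesDB.mdf'),
LOG ON
    (NAME = SalesDB_Log,  FILENAME = 'E:\SQLLogs\SalesDB.ldf');

-- Move tempdb (temporary storage) to its own drive;
-- the change takes effect after the instance restarts.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'F:\SQLTemp\tempdb.mdf');
```

Putting log files on their own spindle matters because log writes are sequential, and mixing them with random data-file I/O on the same drive hurts both.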
Running everything on localhost does minimize network traffic, but you also have to configure the server properly to get the best performance.