What is the Difference Between Physical Sockets, Physical Cores, and Logical Cores?
Consider a rack-mounted server with two slots for CPUs. Typically two quad-core or six-core CPUs are used, providing a total of 8 to 12 processing cores per server. So what does "socket" mean for a server? Documentation often specifies the number of sockets (1-socket, 2-socket) alongside the number of processors and the number of cores, which can be confusing.
In the market for a dedicated or bare metal server? There is an abundance of configurations to choose from. The backbone of any server is the number of CPUs that will power it, as well as the actual model and the type of the CPU.
From that point, you add the amount of RAM, storage, and other options that your use case requires. After reading this article, you should be able to understand the differences between a single processor and a dual processor server. If you are planning to build a bare metal environment for your workload, one of the key decisions is whether to go for a single or dual processor setup.
This article should steer you towards making a proper decision for your future infrastructure needs. Back in the day when computers started entering every aspect of our lives, we could not even imagine a multi-core CPU.
It was a battle of high CPU core clock speeds. The higher the clock speed, the faster a CPU could process information. When single-core CPUs were no longer sufficient, manufacturers started developing chips with multiple cores and threads. Soon enough, we started seeing servers with multiple CPUs on one motherboard. But what is the difference between a CPU, a core, and a thread?
Read along for a brief overview. Single-core CPUs were able to handle only one set of instructions at a time. Virtually all modern CPUs now contain multiple cores. This enables the execution of multiple tasks at the same time. A core is a physical part of a CPU.
Cores act like processors within a single CPU chip. The more cores a CPU has, the more tasks it can perform simultaneously. One core can perform one task at a time while other cores handle the other tasks the system assigns. This way, the overall performance is substantially higher when compared to old single-core CPUs. There are also logical cores that function as separate threads within a core.
While they boost performance, logical cores are not a match for physical cores. If a CPU has six cores with two threads per core, that means there are twelve logical cores available to process information.
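As a quick sketch of that arithmetic (the helper function below is illustrative, not a standard API; note that `os.cpu_count()` reports the logical core count of the machine it runs on, not the physical one):

```python
import os

def logical_cores(physical_cores: int, threads_per_core: int) -> int:
    """Logical cores = physical cores x hardware threads per core (SMT)."""
    return physical_cores * threads_per_core

# A six-core CPU with two threads per core exposes twelve logical cores.
print(logical_cores(6, 2))  # 12

# os.cpu_count() reports the *logical* core count, so on an SMT system
# it is typically double the number of physical cores.
print(os.cpu_count())
```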
The main difference between threads and physical cores is that threads cannot operate in parallel. While two physical cores can simultaneously perform two tasks, one core alternates between its threads.
This happens so fast that it appears true multitasking takes place. Single processor servers run on a motherboard with one socket for a CPU. This means that the highest core-count CPU available on the market determines the maximum core count per socket.
With six-core CPUs clocked in the 3 GHz range, the advance of CPU technology allowed single processor servers to handle intensive workloads. How much they can handle strictly depends on the model of the CPU that powers the server as well as other components, such as the amount of RAM.
Since the discrepancy between the single processor server configurations can be significant, it is useful to divide them into a few categories. This is by no means an official categorization of servers. It is simply a high-level classification so you can get a general idea of how we can use single processor servers. With low-end entry-level single processor servers, you can expect to build a general application server for a smaller organization.
This includes a mail server for a dozen or so active employees. Learn more about application servers by referring to our article Web Servers vs Application Servers. Cost-effective single processor servers can be machines powerful enough for a development and test environment for your team of programmers. In this segment, you can also expect to set up your own DNS server. Most modern entry-level servers support error-correcting code (ECC) memory. It corrects emerging data errors, prevents potential system crashes, and helps keep the server running around the clock.
The single processor server lineup in the middle segment is also diverse. Additionally, mid-range machines are a good fit for a moderate volume webshop or smaller online games server. Organizations can also deploy these machines as collaboration servers for fluent data exchange between different sectors.
Since data may change at the same time in different locations, collaboration servers keep track of the changes and handle them properly. There are many different applications for collaborative servers, ranging from interactive 3D experiences to project management tools.
If the budget allows for a top-spec single processor server, you can create a high core-count machine for more intensive workloads. Some of those applications include certain scientific simulations and statistical computations.
Other than that, large volume websites and online stores can effectively run on these robust servers. You can also create a smaller virtual environment and make a multi-purpose server using one unit. High-end servers are also suitable for potential scaling and for creating server clusters for intensive workloads.
The most apparent distinction between single and dual processor servers is that the motherboard has two CPU sockets instead of one.
This is followed by additional benefits such as the massive amount of PCIe lanes, two separate sets of cache memory, and two sets of RAM slots. In theory, such a server can run with only one CPU installed, but this rarely happens since dual processor servers almost always have both sockets occupied. One thing to bear in mind about dual processor servers is the inherent latency present in such systems.
This refers to the compute tasks that require the same data sets. To efficiently share the available resources and avoid interrupting each other, there is a need for NUMA (non-uniform memory access).
NUMA helps with assigning available memory and devices to each CPU, keeping latency times as low as possible. But in the workloads intended for these machines, this is not an issue. Dual processor servers, and multiprocessor systems in general, are the best options for space-restricted environments. When a business requires as much compute power as possible in a single unit, it needs to use multi-socket setups to fit a large amount of compute power into a constricted space.
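The effect NUMA mitigates can be sketched with a toy model. The distance values below mirror the conventions of `numactl --hardware` output on a two-socket Linux box (10 for local access, a larger number for remote), but the matrix itself is hypothetical:

```python
# Illustrative sketch (not a real NUMA API): model the node distance
# matrix that tools like `numactl --hardware` report. Local access is
# cheapest; remote access crosses the inter-socket link and costs more.
DISTANCES = {  # (cpu_node, memory_node) -> relative access cost
    (0, 0): 10, (0, 1): 21,
    (1, 0): 21, (1, 1): 10,
}

def access_cost(cpu_node: int, memory_node: int) -> int:
    return DISTANCES[(cpu_node, memory_node)]

def prefer_local_node(cpu_node: int, nodes=(0, 1)) -> int:
    """A NUMA-aware allocator picks the memory node with the lowest cost."""
    return min(nodes, key=lambda n: access_cost(cpu_node, n))

print(prefer_local_node(0))  # 0: local memory wins
print(access_cost(0, 1))     # 21: remote access over the socket interconnect
```

This is exactly the assignment NUMA-aware operating systems make automatically: keep each CPU's working set in the memory attached to its own socket.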
Quite often, dual processor servers contain top of the line processor chips. This makes them suitable for virtually every market segment and business use case. Note that typical small business applications will not benefit from the high core-count. Where these servers really shine is in multi-threaded, CPU intensive applications such as scientific high precision computations and simulations. The same goes for machine and deep learning, render farms, and similar HPC deployments where an extreme amount of CPU calculation takes place.
Environments such as a large database tasked with numerous simultaneous queries take advantage of servers powered by two CPUs and as many cores as possible. The more cores are available, the more database tasks a system can handle. Dual processor servers can even handle multiple databases on a single machine due to the sheer amount of processing power. These servers shine when they serve as the basis for a virtual environment or the backbone of a server cluster.
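A rough sketch of the idea: more worker slots drain a backlog of simulated queries concurrently. `handle_query` is a hypothetical stand-in for real database work; a thread pool fits here because real queries are largely I/O-bound, while purely CPU-bound work would use processes instead:

```python
from concurrent.futures import ThreadPoolExecutor

def handle_query(query: str) -> str:
    # Hypothetical stand-in for real database work.
    return query.upper()

queries = [f"select {i}" for i in range(8)]

# More cores let the server run more workers side by side,
# so a backlog of queries drains faster.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle_query, queries))

print(results[0])  # SELECT 0
```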
With up to 56 cores and double the threads, you can even assign physical cores to your virtual machines for better performance and stability. You may have noticed that we did not create a classification of dual CPU servers in different segments.
The main reason is that if you are in the market for a dual CPU server, you are already deep in the high segment of the computing world. Still, providers have outlet offers for dual CPU machines where you can lease a bare metal server without breaking the bank. As is usually the case, the more, the better. Higher core-count machines certainly outperform those servers with six or eight cores and a single CPU chip.
However, not everything is so simple. While dual CPU setups pack enormous core counts and outshine single processor servers by a large margin, some tests have shown only a marginal performance increase over single CPU configurations with similar core-counts and clock speeds per chip. This refers to circumstances where two CPUs worked on the same data at the same time. On the other hand, we see immense performance boosts in dual processor servers when the workload is optimized for setups like these.
This is especially true when the CPUs carry out intensive multi-threaded tasks. One such approach is abstracting the resources into virtual machines that work on separate things at the same time. The sheer socket and core count is not always the deciding factor.
Dual processor servers support much more RAM than is the case with single processor servers. Newer kinds of memory boost performance for data-hungry applications while keeping a high level of security.
There is no magic formula for determining whether you need a low-end single processor server or a dual processor monster. There are multiple factors that play a major role in this decision. This also depends on whether you want to lease a server or buy one. The reason is the amount of work involved in setting up the proper air conditioning, power, cabling, and everything else you need to run a stable data center.
Here are some of the guidelines for determining the right bare metal server for your business. With massive core-count CPU chips, it may seem that dual processor servers are best suited for enterprise environments and data centers.
Those buildings need to house as many cores per unit as possible to save space. High-performance servers underpin successful business operations and drive forward scientific research and development. That is why it is essential to select the proper server for your IT infrastructure. If you need any help in making this decision, you may want to turn to professionals. You can also contact phoenixNAP experts directly for assistance in choosing the right server for your workload.
Whenever we do a 4P server review, such as our recent Supermicro P-TN8R Review, we inevitably get the question: Why are all servers not 4-socket servers? In each review, we try to cover the pros and cons, but instead, we wanted to get an article out discussing some of the factors. This is not an exhaustive list, but it frames some of the dynamics in the market. We also want to note that this is being done in Q1, and subsequent CPU generations may change how these are viewed.
We took some video of this discussion. If you are on the go and prefer listening to learn about the market dynamics, this should help. Check that out here. In our discussion, we are going to talk about some of the reasons that 4-socket servers can make a lot of sense. We will then discuss why they are still a relatively small part of the market by unit volume.
Finally, we are going to discuss some of the regional variations. Perhaps the biggest reason to go to 4-socket servers is to scale-up node sizes. This means that one has more memory and more CPU power in a node. As a result, one can scale-up problem sizes without having to move off the node and onto network fabrics.
Even for virtualized servers, larger nodes mean that one can get higher virtual machine density with fewer stranded resources. One can also get more expansion per node. From a cost perspective, there are a number of factors that can help make 4-socket servers sensible.
For example, there are many infrastructure pieces based on a per-node model where costs can be reduced by consolidating into larger nodes. These types of costs include management software as well as simple items such as fewer management switch ports.
Cost savings extend to the hardware side. Many servers can utilize a single NIC per node. If this is the case, doubling the capacity of the node essentially means half the number of NICs. Inside the node itself, there are a number of cost efficiencies as well.
For one node per chassis systems, the cost of the chassis and two power supplies are split across twice the amount of CPU sockets and memory which effectively lowers overhead costs. One needs a boot drive or set of boot drives no matter the size of the node so this also scales with nodes, not sockets. The easiest way to think about these cost savings are that there are server infrastructure costs that still scale per node rather than per socket or per core.
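That per-node versus per-socket scaling can be expressed as a simple cost model (all figures below are hypothetical, purely for illustration):

```python
def cost_per_socket(node_fixed_cost: float, sockets: int,
                    per_socket_cost: float) -> float:
    """Chassis, PSUs, boot drives and NICs are paid once per node, so
    spreading them across more sockets lowers the overhead per socket."""
    return node_fixed_cost / sockets + per_socket_cost

# Hypothetical numbers: $2000 of per-node infrastructure,
# $3000 of CPU + memory per socket.
two_socket = cost_per_socket(node_fixed_cost=2000, sockets=2, per_socket_cost=3000)
four_socket = cost_per_socket(node_fixed_cost=2000, sockets=4, per_socket_cost=3000)
print(two_socket, four_socket)  # 4000.0 3500.0
```

The fixed $2000 amortizes over four sockets instead of two, which is exactly the "per node rather than per socket" effect described above.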
Running each node at higher power levels, by utilizing more components, means power supplies run at the higher end of their operating ranges. That can usually yield higher power supply efficiency, further lowering TCO. Aside from the benefits of scaling up a node and lower per-socket infrastructure costs, there is another benefit: upgrades. Some fairly large service providers had a strategy to deploy 4-socket servers with two CPUs, then upgrade them mid-lifecycle to four CPUs, getting twice the performance or more.
They would not have to buy more servers and instead, just add CPUs. Many of these upgrade projects were put off indefinitely, but that is a reason some have gone 4-socket. Despite the advantages laid out above, as well as some others, 4-socket servers are far from the dominant architecture today.
Instead, 2-socket servers are dominant in terms of unit volume. The obvious answer here is cost. Each node simply costs significantly more even though one may be able to get per-node cost savings.
For smaller installations of a rack or less, many organizations simply want more nodes for smaller failure domains. With fewer, more costly nodes, a node that is offline for any reason represents a higher proportional cost of offline resources. As we saw with Licenseageddon Rages as VMware Overhauls Per-Socket Licensing, modern software is increasingly licensed per-core rather than per-socket or per-node. In the enterprise space, where software costs often dwarf hardware costs, getting more performance per core is more important than driving scale-up performance in a node.
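A quick sketch of why per-core licensing removes the consolidation incentive (the fee and core counts here are hypothetical):

```python
def license_cost(nodes: int, sockets_per_node: int, cores_per_socket: int,
                 per_core_fee: float) -> float:
    """Under per-core licensing, consolidating onto fewer, bigger nodes
    does not reduce the bill -- only the total core count matters."""
    return nodes * sockets_per_node * cores_per_socket * per_core_fee

# The same 128 cores cost the same whether spread across four
# 2-socket nodes or consolidated into two 4-socket nodes.
spread_out   = license_cost(nodes=4, sockets_per_node=2, cores_per_socket=16, per_core_fee=50)
consolidated = license_cost(nodes=2, sockets_per_node=4, cores_per_socket=16, per_core_fee=50)
print(spread_out == consolidated)  # True
```

Contrast this with per-node or per-socket licensing, where the consolidated layout would cut the bill in half; that shift is why higher per-core performance now often beats scale-up node size.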
When it comes to performance, one of the biggest impacts comes from UPI link bandwidth. In dual-socket servers, designers generally elect 2 or 3 UPI links per CPU. Often in lower-cost servers, you see 2 UPI links to save on motherboard cost and power consumption. The Supermicro BigTwin review we did has 3 UPI links even in a dense 2U 4-node configuration, which required design work to provide the higher-performance links. If you have very little data traversing the socket-to-socket links, then this is OK.
For many applications, this is a significantly worse version of the 4-socket topology since it can mean making two hops instead of one. That was a major innovation with the Volta series of GPUs because more links enable better topologies.
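The hop-count difference can be checked with a small breadth-first search over the two topologies (a sketch, treating each socket as a graph node):

```python
from collections import deque

def max_hops(adjacency: dict) -> int:
    """Worst-case number of interconnect hops between any two sockets (BFS)."""
    worst = 0
    for start in adjacency:
        dist = {start: 0}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for nbr in adjacency[node]:
                if nbr not in dist:
                    dist[nbr] = dist[node] + 1
                    queue.append(nbr)
        worst = max(worst, max(dist.values()))
    return worst

# Four sockets wired as a ring: opposite sockets need two hops.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
# Fully connected: every socket reaches every other in one hop.
full = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}

print(max_hops(ring), max_hops(full))  # 2 1
```

The fully connected layout needs more links per socket, which is the same link-budget trade-off described above.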
Rounding out the Xeon discussion, we still need to address the Platinum and the Xeon Bronze and Silver lines. That Xeon Platinum series is not socketed and is limited to two packages. The Intel Xeon Bronze and Silver lines cannot do 4-socket, so one is left with single and dual-socket applications.
From a density standpoint, in the best case these 4-socket servers, if they were a single U each, would have the same density as a 2U 4-node server. Most 4-socket servers are instead 2U-4U servers, so they are effectively no better than half of modern 2-socket density.
Complexity is also higher in 4-socket servers. More complex PCB and integration make it slightly harder to design systems. As a result, we see fewer vendors make 4-socket servers. This is further reinforced by lower volumes which means there is a volume and complexity barrier to entering the market. Another aspect that is often overlooked is what happens with 4-socket boot times. If you ever have to reboot a four-socket server, they can be very slow.
They can take minutes to POST and boot. Years ago, we raced two 4-socket servers from Dell and Supermicro, and they took minutes to boot. This is actually a big reason we do not use 4P servers in the STH hosting infrastructure: they take so long to come online. The world of 4-socket and even 8-socket servers has a fun quirk, and that is a numeric regional preference. This is similar to why in the US you do not often see elevators with 13th floors and airplanes with 13th rows of seats.
If you are in Asia, there are many buildings without 4th floors. That is just a regional difference. From what we have heard from both OEMs and Intel, the vast majority of 8-socket servers today are sold in China. The most common reason for this that we hear is not the scale-up benefits. Instead, it is the numerology of the 8. It just so happens that 8 in China is a lucky number, much like 7 is considered by many in the US to be a lucky number.
That is an interesting tidbit in the event you thought that all server purchasing was done solely on completely rational performance merits. Humans still run the processes, and therefore things like this are more common than you think. This is relatively a non-factor at this point. There are plenty of applications that one wants to scale-up a single node to 4-sockets.
Even if you do not have an application like that, you can still see a lot of cost savings by moving to 4-socket infrastructure, both in the initial purchase and in some of the ongoing operational costs. You also get features like more PCIe expansion per node. There are some negatives to going 4-socket. Still, if you are aware of the pros and cons of four-socket servers, they can make a lot of sense for some organizations.
There are probably segments of the market out there that could or should be using four-socket servers and are not at this point. We have pooled HPE Z stations in our trading room for silly good improvements in performance while maintaining near silence.
I have just begun reading about sound cancellation enclosures such as from Silentium, who claim 32dBa reduction. This would let us run standard TOR and hot aisle servers and reduce or eliminate our physical separation for soundproofing notably. One kilowatt hour or two, luv? Extra TDP with your bun today, young man, or just a little lump of core turbo, dear?
I remember in electronics classes, edge connections were frowned upon from a great and intolerant height by our prof… I like that the white dude is the one bringing this perspective.
Is the 80 core Ampere benefiting from this lucky phonetic tradition as well? Is this a joke? I am from China. Great write-up.