



Does foundry make money?


(ball valve)

(gate valve)

Ball valves have the advantage of opening and closing quickly, while gate valves offer excellent sealing.

By the same token, GAAFET offers better channel sealing (gate control) than FinFET: it has a structural advantage in suppressing leakage current and sustaining stable mid-to-high clock speeds at low power.

The downside is the flip side of that analogy: compared with FinFET, which has the ball valve's fast open-and-close (switching) character, GAAFET is weaker at sustaining maximum clock speeds.

Until now, refining FinFET was more economical from a cost-utility standpoint, so GAAFET adoption was postponed. But at the 2 nm node, where the FinFET structure hits its limits, adopting GAAFET becomes unavoidable.

Samsung Foundry in particular lagged TSMC in FinFET process maturity, suffering greater leakage current and the problems that followed from it, so it introduced GAA proactively to capitalize on its attractions. The Exynos 2500 was mass-produced on the SF3 process.

(Exynos 2400)

(Exynos 2500)

However, perhaps because it was the first mass production with the GAA structure, I could find no clear difference between it and the FinFET-based SF4 Exynos 2400.

In the soon-to-be-released SF2-process Exynos 2600, though, the little cores are removed and six middle cores run at 2.75 GHz; I judge that this reinforced configuration finally reveals the advantages of the GAA structure.

The little cores, similar to Intel's E (efficiency) cores, caused scheduling issues and the severe responsiveness degradation that followed from them, but they have now been eliminated.

I can't say the SF2 process has surpassed TSMC's FinFET-based N3 process, but it appears to be catching up, if only very slightly.

In previous generations, perceived performance fell short of the benchmark scores; this generation is fundamentally different from those, which builds anticipation.

In addition, the long-term contract Tesla signed with Samsung Foundry at the end of July last year raises expectations for SF2 commercialization even higher. Tesla must have been provided with process characterization data.

Dedicated AI accelerators such as automotive AI chips, TPUs, and NPUs are designed to minimize FP64 (double-precision floating-point) arithmetic units, which carry high processing cost for little utility. On such designs, a low-power processor sustaining mid-to-high clocks shows an excellent performance-per-watt ratio.
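As a rough illustration of why dropping FP64 hardware pays off, here is a back-of-envelope sketch (my own, not from the article; the byte widths are the standard IEEE 754 / integer storage sizes) of how much memory the same weights cost at each precision:

```python
# Bytes needed to store one value at each precision an accelerator might support.
# Silicon not spent on wide FP64 datapaths can go to more FP16/INT8 throughput.
BYTES_PER_ELEMENT = {"FP64": 8, "FP32": 4, "FP16": 2, "INT8": 1}

def model_size_gb(num_params: int, precision: str) -> float:
    """Memory required to hold num_params weights at the given precision."""
    return num_params * BYTES_PER_ELEMENT[precision] / 1e9

# A 1-billion-parameter model, purely for illustration
params = 1_000_000_000
for p in ("FP64", "FP32", "FP16", "INT8"):
    print(f"{p}: {model_size_gb(params, p):.1f} GB")
# FP64: 8.0 GB ... INT8: 1.0 GB
```

The 8x gap between FP64 and INT8 storage is one reason inference-oriented accelerators skip double precision entirely.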

Elon Musk would have paid attention to this, and I suspect that is what led to the long-term contract with Samsung Foundry.

If the characterization data Musk received was technically honest, the Galaxy S26 with the Exynos 2600 will most likely confirm a sufficiently high performance ratio and stable operation.

Rather than dreading the Exynos 2600's coming moment of judgment, the reasons to wait in anticipation are:

1. The core configuration of the Exynos 2600

2. Tesla’s long-term contract

3. Bets by big global funds

If this becomes reality, Samsung Foundry will open its door to fabless customers in the specialized AI accelerator market, separate from the general-purpose AI accelerator market served by TSMC's HPC collaborations.

I simply hope that Samsung Foundry becomes established as a pillar supporting Korea's semiconductor industry and economy. This is not an article predicting a further rise in Samsung Electronics' stock price.

For reference:

AI computation is an extended abstraction of expressing and calculating vectors (magnitude and direction) on the x, y, and z axes of three-dimensional space, so the concept is easy to grasp. For this reason, GPUs specialized in vector operations became the basis of AI accelerators.
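A minimal sketch of that abstraction: the dot product written for a familiar 3-D vector is, line for line, the same operation in any number of dimensions, which is why hardware built for vector math maps directly onto AI workloads.

```python
def dot(a, b):
    """Dot product: identical code whether a vector has 3 components or a million."""
    return sum(x * y for x, y in zip(a, b))

# A familiar x, y, z vector
v3 = [1.0, 2.0, 3.0]
w3 = [4.0, 5.0, 6.0]
print(dot(v3, w3))  # 32.0

# The same operation extended to a million dimensions
v_high = [0.5] * 1_000_000
w_high = [2.0] * 1_000_000
print(dot(v_high, w_high))  # 1000000.0
```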

At tens of billions of parameters or more, each parameter becomes a coordinate value on its own axis, forming a virtual vector of very high (tens-of-billions) dimensionality. Because cloud AI companies run deep-learning models at this scale, they are suffering the cost of buying high-capacity, high-bandwidth memory at high prices.
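To see the capacity pressure concretely, here is an illustrative calculation (my own numbers, not from the article; the 80 GB device is a hypothetical HBM accelerator) of how many devices it takes just to hold the weights:

```python
import math

def accelerators_needed(num_params: int, bytes_per_param: int, hbm_gb: int) -> int:
    """Minimum devices whose HBM can hold the weights alone
    (ignoring activations, KV cache, and optimizer state)."""
    total_gb = num_params * bytes_per_param / 1e9
    return math.ceil(total_gb / hbm_gb)

# 70B parameters at FP16 (2 bytes each) against a hypothetical 80 GB HBM device:
# 140 GB of weights -> at least 2 devices before any actual computation happens
print(accelerators_needed(70_000_000_000, 2, 80))  # 2
```

Everything above the weights themselves (activations, caches, redundancy) only multiplies the memory bill further.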

Even Cupertino's bitten apple saw its RAM-Scrooge strategy suffer a setback because of on-device AI.

Naturally, the AI industry is doing its utmost to develop lightweight models that drastically reduce parameter counts, and I hope they are completed as soon as possible.
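One direction the lightweight-model effort takes can be sketched in a toy example (a generic magnitude-pruning illustration of mine, not a method the article names): dropping the smallest-magnitude weights shrinks the parameter count, and with it the memory bill, at some accuracy cost.

```python
def prune(weights, keep_ratio):
    """Keep only the largest-magnitude fraction of weights (magnitude pruning)."""
    k = int(len(weights) * keep_ratio)
    return sorted(weights, key=abs, reverse=True)[:k]

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.002]
kept = prune(w, 0.5)
print(kept)       # [0.9, -0.7, 0.4] -- half the weights survive
print(len(kept))  # 3
```

Real compression pipelines combine pruning with quantization and distillation, but the economic point is the same: fewer stored parameters means less memory to buy.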

Still, you must always remember that the moment lightweight-model AI services become visible, whenever that is, the memory market may cool down.

It should also be borne in mind that unlike smartphones with mobile DRAM, AI servers fitted with HBM have relatively long replacement cycles, so in an oversupply the downturn could be far more severe.
