[Discuss] Port Scanning




On 2024-08-06 13:03, Dan Ritter wrote:
> Daniel M Gessel wrote:
>>
>> On 2024-08-06 11:47, Dan Ritter wrote:
>>> Daniel M Gessel wrote:
>>>> On 2024-08-06 00:31, Bill Bogstad wrote:
>>>>> We would have a whole lot fewer moles to whack if we changed our tools.
>>>> In some cases a 5% performance hit is huge - offering up "our programmers
>>>> make mistakes" as a justification is a non-starter.
>>> Remember that:
>>>
>>> - virtual machines impose a penalty of 1% or more -- worse when
>>>     not optimally configured
>>>
>>> - the mitigations for various speculative execution and memory
>>>     hammer attacks can impose 2-30% penalties depending on
>>>     specific programs
>>>
>>> - changes between stable kernel versions can be +/- 15% in some
>>>     cases
>>>
>>> All of those can already be cited as "our programmers make mistakes".
>> I honestly don't know how the first two address programmer mistakes; can you
>> explain?
> The rise of virtual machines and containers is an admission of
> systemic failure: people gave up on managing dependencies in a
> sensible manner. Rather than have a deployment system which
> produces a working program plus libraries and configuration,
> these systems effectively ship a developer's laptop to the
> cloud.
That system-software semantics change between releases, and that there 
is no global release cycle every developer adheres to, is outside the 
scope of any individual developer's responsibilities.

> Mitigations for Spectre and Rowhammer are required because we
> persistently run other people's code on our hardware, or if you
> prefer, we keep running our code on other people's hardware and
> pretending that it's our hardware.
I thought these were hardware bugs (at least according to Wikipedia) 
that could be exploited by malicious (not broken) code? Programmers 
tripping over hardware bugs really isn't their error...

Hardware workarounds are invariably painful - but in my experience 
they're usually turned off when running trusted code if there's a 
significant performance hit.
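
On Linux you can at least see what the kernel decided: recent kernels 
expose per-vulnerability status under 
/sys/devices/system/cpu/vulnerabilities, and the usual coarse switch 
for trusted-code environments is the mitigations=off boot parameter. 
A minimal sketch (C, Linux-only) that just dumps that directory:

    /* Print the kernel's view of CPU vulnerabilities and the
     * mitigations it applied.  Linux-only: relies on the sysfs
     * directory below, present on recent kernels. */
    #include <dirent.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *base = "/sys/devices/system/cpu/vulnerabilities";
        DIR *dir = opendir(base);
        if (!dir) {
            perror(base);
            return 1;
        }
        struct dirent *ent;
        while ((ent = readdir(dir)) != NULL) {
            if (ent->d_name[0] == '.')
                continue;              /* skip "." and ".." */
            char path[512], line[256];
            snprintf(path, sizeof path, "%s/%s", base, ent->d_name);
            FILE *f = fopen(path, "r");
            if (!f)
                continue;
            if (fgets(line, sizeof line, f)) {
                line[strcspn(line, "\n")] = '\0';
                /* e.g. "spectre_v2   Mitigation: Retpolines ..." */
                printf("%-24s %s\n", ent->d_name, line);
            }
            fclose(f);
        }
        closedir(dir);
        return 0;
    }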
> I don't know where you've worked, but I will bet a shiny nickel
> that 5% drops and 5% improvements happened in different sections on
> most major releases.
Definitely - but performance was tested on daily builds, and drops in 
key software would be raised as block-ship issues. Drops didn't matter 
in some software, but that software probably wouldn't have been part 
of the performance test suite anyway.
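
As a toy illustration of that kind of gate (the workload, the baseline 
number, and the 5% threshold below are all invented for the example), 
a daily-build check could be as simple as:

    /* Toy "block-ship" performance gate: time a workload on today's
     * build and exit nonzero if it regresses more than 5% against a
     * recorded baseline.  A real suite would persist per-benchmark
     * baselines instead of hard-coding one. */
    #define _POSIX_C_SOURCE 199309L
    #include <stdio.h>
    #include <time.h>

    static volatile unsigned long sink;  /* keep loop from optimizing away */

    static void workload(void)           /* hypothetical key code path */
    {
        unsigned long acc = 0;
        for (unsigned long i = 0; i < 50000000UL; i++)
            acc += i * i;
        sink = acc;
    }

    int main(void)
    {
        const double baseline = 0.120;   /* seconds, prior build (invented) */
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        workload();
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double now = (t1.tv_sec - t0.tv_sec)
                   + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("baseline %.3fs, current %.3fs\n", baseline, now);

        if (now > baseline * 1.05) {     /* flag drops worse than 5% */
            fprintf(stderr, "REGRESSION: raise a block-ship issue\n");
            return 1;
        }
        return 0;
    }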

Hand-optimized assembly code may be used in performance-critical 
sections - compiler checks won't help there.
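
For example, reading the x86 timestamp counter with GCC/Clang inline 
assembly (a minimal sketch; the constraint list is the only part the 
compiler can check - the instruction itself is taken on faith):

    /* Hand-written assembly is opaque to the compiler: it can neither
     * optimize inside the block nor verify it.  x86-64, GCC or Clang. */
    #include <stdint.h>
    #include <stdio.h>

    static inline uint64_t rdtsc(void)
    {
        uint32_t lo, hi;
        /* Constraints say which registers are written; nothing checks
         * that "rdtsc" is the instruction we actually meant. */
        __asm__ volatile ("rdtsc" : "=a" (lo), "=d" (hi));
        return ((uint64_t)hi << 32) | lo;
    }

    int main(void)
    {
        uint64_t start = rdtsc();
        /* ... performance-critical section would go here ... */
        uint64_t end = rdtsc();
        printf("elapsed cycles: %llu\n",
               (unsigned long long)(end - start));
        return 0;
    }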