GetCurrentProcessID to Get Pid #38365

Anipik wants to merge 2 commits into dotnet:master from Anipik:processId
Conversation
Tagging subscribers to this area: @safern, @ViktorHofer

There's no PR description. What problem is this change solving?

I added the description.
```csharp
internal static int GetProcessId()
{
    int id;
    using (var process = Process.GetCurrentProcess())
```
This might regress perf in this case. Before there was just a single PInvoke, now there is a bit more (allocation, a few more method calls). It also removed caching on Windows. cc @safern who added this originally.
Likely more than a little. Even without caching, the Process.GetCurrentProcess().Id approach is ~20x slower in throughput (and allocates ~250 bytes) than just making the P/Invoke call (which allocates nothing). Add the caching, and the difference is way larger.
Without a much more convincing argument about why this is important to take, we shouldn't.
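To illustrate the pattern the reviewers are defending, here is a minimal sketch, assuming a hypothetical `ProcessIdHelper` class and a hand-rolled P/Invoke declaration (the runtime's actual interop classes differ): a single allocation-free call to the Win32 `GetCurrentProcessId`, cached after the first read.

```csharp
using System;
using System.Runtime.InteropServices;

internal static class ProcessIdHelper
{
    // Win32 API; returns the caller's process id as a DWORD.
    [DllImport("kernel32.dll")]
    private static extern uint GetCurrentProcessId();

    // 0 doubles as the "not yet fetched" sentinel: a Windows user
    // process never has id 0.
    private static volatile int s_processId;

    internal static int GetProcessId()
    {
        int id = s_processId;
        if (id == 0)
        {
            s_processId = id = (int)GetCurrentProcessId();
        }
        return id;
    }
}
```

The benign race on `s_processId` is fine here: every thread that loses the race computes and stores the same value.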
I wonder if we should have CoreLib or Process expose a public GetCurrentProcessId() that does the right thing for the platform. I took a quick scan and see GetPid called in 4 places, including CoreLib, where it is cached despite forking.
An Environment.ProcessId property would be reasonable, IMO. A non-trivial number of places use Process just to get at the current pid.
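For the record, this suggestion was eventually adopted: `Environment.ProcessId` shipped in .NET 5 as a cached, allocation-free property with no dependency on `System.Diagnostics.Process`. A minimal usage example (requires .NET 5 or later):

```csharp
using System;

class PidDemo
{
    static void Main()
    {
        // Cheap and cached; no Process allocation, linker-friendly.
        int pid = Environment.ProcessId;
        Console.WriteLine(pid > 0);                      // True
        Console.WriteLine(pid == Environment.ProcessId); // True: stable per process
    }
}
```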
We can add caching here, but I am not sure about its effect on the Unix code path:
```csharp
// Whereas the Win32 implementation caches the GetProcessId result, the Unix
// implementation doesn't so as to avoid problems with fork'd child processes
// ending up returning the same id as the parent.
```
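A sketch of the Unix behavior that comment describes, using an illustrative `UnixProcessId` class (not the runtime's real interop shape) that P/Invokes libc's `getpid` directly: the id is fetched fresh on every call, so a fork'd child can never observe a stale parent value.

```csharp
using System;
using System.Runtime.InteropServices;

internal static class UnixProcessId
{
    // POSIX getpid(2); how "libc" resolves is platform-dependent
    // (a glibc-style Linux is assumed here).
    [DllImport("libc", EntryPoint = "getpid")]
    private static extern int GetPid();

    // Deliberately uncached, per the comment above.
    internal static int GetProcessId() => GetPid();
}
```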
That comment seems like it could only be an issue if managed code ran in a fork'd process before exec is called, but if that happened, there are plenty of other things that could fail in much worse ways, e.g. the fork'd process is only going to inherit one thread, which will likely wreak havoc on the GC, cause hangs, etc. In short, managed code is not fork-safe. So, while the comment may have been accurate at the time it was written, I don't believe it is now, and we should be able to cache the current pid.
Calling Process.GetCurrentProcess just to get the current process ID is very unfriendly to IL linking. The transitive closure of System.Diagnostics.Process is huge (MBs of code). It is not something that can be fixed by caching.
I agree that the right fix would be to expose a lightweight current-process-ID property on e.g. Environment.
This already uses Process in the same type to get ProcessName, though that appears to be called in only one of the assemblies (the code actually gets removed from TraceSource by the linker in the shared framework). So this change would regress the linked size of TraceSource.
If we wanted to benefit in the TextWriterTraceListener case as well, we'd need a method on Environment for ProcessName too.
I can understand that it makes sense to do this for OOB packages. But why do we want to do this for inbox libraries? The inbox libraries are fundamentally a vertical, platform-specific build. We can share between different verticals a bit more here and there, but does it really move the needle?
We had ~90 libraries that cross-compile by platform before @Anipik started this work; now we are down below 40. Every single one of those needs a code change when we add a new platform that doesn't derive from an existing one. Reducing this by pushing down PALs helps simplify the code base and makes supporting new platforms easier. This work is happening as part of supporting
What are the 50 libraries that have had code changes to eliminate cross-compilation? Can you point to some of the PRs?

Ah! I misunderstood Eric's comment, I think... I thought he was saying we made code changes to ~50 libraries to remove PALs. That made me both curious and very nervous. But that's not what the linked PR is doing; it's really just removing unnecessary splitting.

I think you misunderstood what I was saying. Most didn't need a change to reduce to platform agnostic, but would require a change when adding a platform, since you need to decide what code that platform gets. By reducing, you don't need to make that decision because the library doesn't have platform-specific code. #37623 is the one adding the new platform, so you can see the sort of decisions that need to be made per library (many temporary until tests can be run and real fixes made). I want to call out that I'm not disagreeing with you @stephentoub and @jkotas: this change is not the right tradeoff. But I do think there is general value in reducing the number of libraries that are platform specific, even in the shared framework.
I can see the value in a forcing function that makes us add the right lower-level APIs that bridge differences between platforms, like the
No, that doesn't solve the problem; you're imagining some new problem and solving it. I'm not talking about "boilerplate" here. I'm talking about the act of thinking and deciding what needs to be done in a platform-specific configuration. You actually exacerbate that when you make everything platform specific. If we make the majority of our assemblies platform agnostic, then we don't need to think or decide what their implementation should look like on the new platform. It will just work.
#37944 is a good example of what adding a new platform typically looks like. There are:
I believe that the volume of actual thinking and deciding is constant for inbox assemblies, independent of how many of them are platform specific vs. platform neutral. My point is that we can make the
Not yet it isn't. We've been peeling off PRs to make this one easier and simpler. We've also been identifying a bunch of places where we should be defining a platform-agnostic behavior for libs rather than forcing them to be specific as part of this PR. I think that once it's finally done we'll have a good example of what set of things need to be platform specific. I think that's particularly valuable and shouldn't be lost by making every library platform specific.
I understand your point and I respectfully disagree with it. If we do that, we lose the distinction of where our PALs should be. We'd still have to touch every project even so (to fill in the partial classes), but we'd have nothing holding the line on which projects should have a PAL vs. not.

We also make it harder to develop for these libraries. Imagine that every library in the shared framework needs to build for every platform in order to validate your changes. This makes the VS experience much worse (either by increasing the number of build configs or hiding configs from you that might fail). It also introduces the potential for dual-mode library packages to accidentally introduce platform-specific behavior without putting the same in the package.

The choice to make a library platform specific vs. not is a significant characteristic of that project, just like the references that project uses, and should be represented explicitly with state in the project file. If you'd like to carry on this discussion, let's do so in a separate issue.
Closing this PR as we agreed that we're not taking the dependency on Process in order to make this platform agnostic. We can revisit once we have the PID API.
We are trying to reduce the number of cross-targeted projects, to reduce the work needed when we add new platforms like wasm.