Posts posted by Hervé

  1. For proper keyboard mappings you can use a combination of DoubleCommand (you can use it to remap the Win key as the Apple key, for instance) and Ukelele Logitech keyboard layouts (useful for mapping special keys such as those with @, | or \ symbols). DoubleCommand is an application that installs a pref pane in System Preferences; for Ukelele, just copy the layout files (usually 2) to /Library/Keyboard Layouts.
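    For illustration only, the copy step can be sketched as below. The layout name is made up, and since the real destination (/Library/Keyboard Layouts) needs admin rights, a temporary folder stands in for it here:

```shell
# Demo of installing a Ukelele layout. Real target: "/Library/Keyboard Layouts"
# (needs sudo); a temp directory stands in so this runs unprivileged.
DEST="$(mktemp -d)/Keyboard Layouts"
mkdir -p "$DEST"

# A Ukelele export usually yields a .keylayout file, often with an .icns icon.
printf '<?xml version="1.0"?>\n<keyboard/>\n' > Logitech-FR.keylayout
touch Logitech-FR.icns

cp Logitech-FR.keylayout Logitech-FR.icns "$DEST/"
ls "$DEST"
```

    After copying the real files, log out and back in, then enable the layout from the keyboard input menu in System Preferences.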

  2. Look at the DSDT/SSDT patching thread below and follow the process. Dinesh might be able to help you there when he has time; if you want it quick, don't hesitate to make him a little donation.

    https://osxlatitude.com/index.php?/topic/1945-dsdtssdt-patching/

     

    There used to be a trick on the D630 that could work around the black screen issue when you had the wrong DSDT, but I only used that as a beginner, with Snow Leopard + ModCD. Basically, the trick was to boot with an external display hooked to the laptop and force display to it with Fn+F8 at startup. Upon boot completion, at activation of the OS X desktop, the display would return to the built-in LCD. Give that a go, you never know...

  3. I don't know the DW1397, only the DW1390 (which is supported OOB) and the DW1395 (which is supported by patched IO80211Family kext or BCM43xxFamily kext). Have a look at your System Preferences hardware list to check if the card is reported there. If it is, you'll see the PCI vendor id + device id which can be looked for in the kext to ascertain hardware support.

     

    You could also open the laptop to have a look at the card itself and see what chip is on it and what's written on the label.
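    As a sketch of what to look for: a kext's Info.plist lists supported hardware as vendor/device id pairs. Broadcom's PCI vendor id is 14e4, and the id after the comma must match your card for the kext to attach. The fragment and device ids below are illustrative only, not the real kext contents:

```shell
# Sketch: a made-up Info.plist fragment in the style of a patched wireless kext.
cat > Info-sample.plist <<'EOF'
<key>IONameMatch</key>
<array>
    <string>pci14e4,4311</string>
    <string>pci14e4,4312</string>
    <string>pci14e4,4315</string>
</array>
EOF

# Check whether a given device id (here 4312) is covered by the match list:
grep -o 'pci14e4,4312' Info-sample.plist
```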

  4. The GMA950 kext will need to be patched to take your Q965/Q963 device id into account, you can't just copy it to /E/E and expect graphics acceleration + resolution to work. By default, it only supports "regular" GMA950. The Intel framebuffer will probably require the same. I'll have a look and get back to you.

  5. GMA950 is the graphics controller of another chipset (945GM). According to the PCI database website, device id 0x2992 belongs to the Q965/Q963 chipset, which integrates graphics + memory controllers. The Q965 supports integrated graphics + a PCIe x16 discrete add-on graphics card, the Q963 only integrated graphics, so look at your motherboard for a PCIe x16 expansion slot to ascertain whether you have the Q965 or just the Q963 chipset.

     

    If this chipset were unsupported by Mac OS X, your only hope would be to have the Q965 which would allow you to add a supported PCIe x16 graphics card. Should it be Q963, you'd be screwed!

     

    You can probably force max resolution through the Chameleon boot plist in /Extra: tick the Graphics Mode box and select the maximum resolution supported by your display. Try and select Graphics Enabler too (if that proves problematic at boot, manually use the boot option 'GraphicsEnabler=No', then untick the option afterwards). If that works, then there's a good chance we can find the correct FrameBuffer.

     

    If the onboard Q965/Q963 video is indeed GMA3000, then I understand the GMA950 kexts could support it (GMA3000 is a direct evolution of GMA900/950). I reckon it will require patching the kext, but that should not be too complicated now that we have the PCI device id.
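    To give an idea of the kind of patch involved (kext layout and key names vary by OS X version; this works on a made-up Info.plist fragment, not the real kext): OS X PCI match strings take the form 0xDDDDVVVV, device id then vendor id, so Intel (8086) device 0x2992 becomes 0x29928086.

```shell
# Illustrative patch: append the Q965/Q963 match (0x29928086) to the stock
# GMA950 match string in a sample Info.plist fragment.
cat > GMA950-Info.plist <<'EOF'
<key>IOPCIPrimaryMatch</key>
<string>0x27A28086</string>
EOF

# -i.bak keeps a backup and works with both GNU and BSD sed.
sed -i.bak 's|<string>0x27A28086</string>|<string>0x27A28086 0x29928086</string>|' GMA950-Info.plist
grep '0x29928086' GMA950-Info.plist
```

    On a real system you would then need to rebuild kext caches (or re-run myFix) for the change to take effect.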

  6. Copy IOATAFamily kext to /E/E of your USB installer and re-run myFix.

     

    Do a quick search on the forum, this issue has been mentioned several times these last few days. I uploaded the kext too.

  7. That file can normally only work on Intel GMA950 graphics controller. Are you using an add-on PCIe graphics card or the integrated GMA3000 controller?

     

    Try and give us the PCI vendor id/device id for your video controller: Apple Menu -> About this Mac -> More Info -> System Report -> Equipment/Hardware -> PCI and/or Video cards/monitors.

  8. It is your CPU, you can take my word for it.

     

    To make the point even stronger: I just ran the Xbench Quartz test on my D630 T7500 GMA X3100 under ML 10.8.2. There's no GPU support, no graphics acceleration at all, yet the test scored 128!!!  :lol:

    [Attachment: XBench.jpg]

    Just get a T7500 or better (T8xxx, T9xxx with FSB800) and then you'll see a real difference. The T7100 appears just too low-spec for the D630 laptop.

  9. I'd simply upgrade the CPU if I were you, that'll be a lot quicker and probably much cheaper to do... Get a faster C2D with bigger cache.

     

    To prove my point:

    a. D620, C2D T7200 2.0GHz FSB667, 2GB DDR2-667, 320GB HDD 7200rpm, nVidia NVS 110m with 1440x900 LCD, Lion 10.7.5

    b. D630, C2D T7500 2.2GHz FSB800, 4GB DDR2-800, 80GB HDD 7200rpm, GMA X3100 with 1440x900 LCD, Lion 10.7.5

    c. D630, C2D T7500 2.2GHz FSB800, 4GB DDR2-800, 160GB SSD, nVidia 135m with 1440x900 LCD, ML 10.8.2

     

    -> Xbench Quartz Graphics tests on all 3, running PState on-demand, no app running and nothing open on the desktop

     

    a. 1) 89.94

    a. 2) 90.29, a few minutes later

    a. 3) 106.65, a few more minutes later

    a. 4) 100.35, even more minutes later

     

    b. 1) 81.78

    b. 2) 65.43, a few minutes later

    b. 3) 71.03, a few more minutes later

    b. 4) 68.84, even more minutes later

     

    c. 1) 43.38 (ok, I forgot to close utorrent in the background)

    c. 2) 212.39 a few minutes later (utorrent being closed)

    c. 3) 87.35 a few more minutes later

    c. 4) 74.97, even more minutes later

    c. 5) 216.22, immediately afterwards

     

    Then ran Quartz + OpenGL tests simultaneously on all 3:

    a. -> 64.09

    b. -> 37.36

    c. -> 71.47

     

    So, go figure! These benchmarks don't mean much to me I'm afraid...

     

    All three laptops run fine, with D620 nVidia definitely a bit slower than the D630 X3100 under the same version of Lion. The D630 nVidia remains the best performer to me, although I don't really feel much of a difference between that one and its X3100 counterpart, the graphics rendering is however much nicer, a lot of that having to do with ML as well...

  10. No, I have slightly higher values. But I have a different (better) CPU and I don't believe XBench actually evaluates the GPU. To me it only tests CPU.

     

    I believe your feeling of lower performance is simply due to having the low-end CPU of the T7xxx family. Your RAM might be slower in the D630 too (timings, not just bus speed).

     

    Also, don't make the mistake of thinking that an nVidia GPU should just be quicker/faster than the Intel integrated GMA X3100. It may be quicker, it may not be (I don't know the inner frequency of the nVidia 135m), but it definitely has more graphical capabilities than its Intel counterparts. That will be felt in graphics-intensive applications (games, video, CAD, all that kind of stuff), not really in basic/regular desktop windowing or browsing.

  11. Yes, the boot flags and/or options that you manually specify at startup override the Chameleon boot plist. Watch out with very recent versions of Chameleon, as they appear to memorise manual boot flags and options afterwards...
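    For reference, the defaults live in /Extra/org.chameleon.Boot.plist; a minimal example might look like this (key names per Chameleon's conventions, values purely illustrative). Anything typed at the boot prompt, e.g. GraphicsEnabler=No, takes precedence over what's stored here:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>GraphicsEnabler</key>
	<string>Yes</string>
	<key>Graphics Mode</key>
	<string>1440x900x32</string>
	<key>Kernel Flags</key>
	<string>-v</string>
</dict>
</plist>
```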

  12. The lowest clock speed that PState tries to set the CPU to is 595MHz, which does not exist for the CPU. I don't think 595MHz is a real SpeedStep frequency supported by a T7x00. Anyway, it's still strange that this only happens on my D630 and not my D820. No matter how low I set the speed there, the GUI benchmarks are always fast. That's why I run those; I don't really care whether it uses GPU or CPU, it's exactly what's causing the sluggishness of the GUI on the D630.

     

    The lowest clock speed Windows/Unix will try on my T7100 is 1GHz, and then one or two steps more up to the max of 1.8GHz. But never as low as 595MHz. If only PState would use the real SpeedStep steps and not something it gets from I don't know where... Still, this shouldn't happen. Very annoying.

     

    I will do the tests later. All PState really needs is a proper list of steps/frequencies supported by the actual CPU, or a way to add/edit those values to match the real CPU. I haven't been able to find any documentation about the Info.plist and how to give it your own CPU speeds.

     

    [...] But I really think this is caused by PState running my CPU too slow; the lowest speed should be something like 1.09GHz, not 595MHz.

    Oh, you need to get to know Intel CPUs a wee bit better...

     

    PState does show/display the correct set of frequencies for your T7100, make no mistake. T7xxx CPUs do operate over a range from 600MHz to 2.8GHz for the fastest of the range, in steps of 200MHz (i.e. steps of the CPU bus speed). It may not show up in Linux, but it certainly would in Windows. Look up the Intel datasheet if you need details or want to get familiar with IDA (a kind of turbo/boost mode) or the LFM/SuperLFM modes (low frequencies). Basically, a T7xxx can run at speeds of variable FID multiplier x CPU bus speed, the lowest SuperLFM multiplier being x3 and the IDA multiplier being x12 -> this gives a range from 600MHz to 1.8-2.8GHz... That's EIST for you (Enhanced Intel SpeedStep).

     

    Now sometimes, the CPU FSB (which is quad-pumped Bus Speed) does not operate at the exact max. speed, but slightly below, so you could expect an 800MHz FSB (4 x 200MHz) CPU to actually run at 798MHz or 799MHz and, as such, it may report a frequency of say 595MHz instead of 600MHz or 2189MHz instead of 2200MHz. That's just perfectly normal.
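    As a quick check of the arithmetic (core frequency = FID multiplier x bus speed; the 198.33MHz figure below is just an illustrative "slightly below 200MHz" bus clock, not a measured value):

```shell
# EIST sketch: core clock = FID multiplier x bus clock. With a bus running a
# touch under its nominal 200MHz, the x3 SuperLFM step lands near 595MHz
# instead of 600MHz, and the x11 step near 2182MHz instead of 2200MHz.
awk 'BEGIN {
    nominal = 200.0; actual = 198.33
    printf "x3  (SuperLFM): nominal %.0f MHz, actual %.0f MHz\n", 3 * nominal, 3 * actual
    printf "x11 (top FID) : nominal %.0f MHz, actual %.0f MHz\n", 11 * nominal, 11 * actual
}'
```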

     

    Use tools like RmClock and CPUID in Windows and you'll see.

    [Attachments: RmClock.jpg, T7500_600MHz.jpg, T7500_800MHz.jpg]

  13. Mmm, I don't understand most of all this, especially the part about the lowest frequencies "which don't really exist"...  :wacko:

     

    Did you check your BIOS settings and compare them to those listed in the dedicated thread?

     

    Also, can you post a screendump of your Cham boot plist (opened with Cham wizard) and tell us what SMBIOS settings you chose?

     

    You should also question the true meaning of benchmarking tools results: you're running graphics tests and the higher the CPU frequency, the better the results -> where's GPU involvement in there? I don't know how the benchmarking is done, but it seems to me that only CPU is tested and, should this be the case, it's not a true reflection of the computer's real graphics capabilities and/or performance.

     

    I'd suggest you run 2 separate tests of GeekBench in the same way as well: first with on-demand performance then with frequency set to highest. I'd be interested to hear which one is better...

     

    I guess that if you feel your laptop runs faster without Emulated SpeedStep, you should remove it.

  14. Only the VGA port is supported, no DVI. Simply connect the monitor, it should be detected. If not, just go to the Monitor Pref Pane and click "Detect Monitors" button.

     

    Do not select port mirroring, it's not working properly and you'd get both screens going all garbled.

  15. We should be able to get past that issue by finding the right Chameleon option or flag. I suggest trying various combinations of the following boot options:

    PciRoot=1 USBBusFix=No/Yes npci=0x2000/0x3000 GraphicsEnabler=No/Yes

     

    Use them along with the -v flag to monitor startup messages.
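    To work through the combinations methodically, something like this prints each candidate line to try, one per boot attempt, at the Chameleon prompt:

```shell
# Print every combination of the suggested options; PciRoot=1 and -v stay fixed.
for usb in USBBusFix=No USBBusFix=Yes; do
  for npci in npci=0x2000 npci=0x3000; do
    for ge in GraphicsEnabler=No GraphicsEnabler=Yes; do
      echo "-v PciRoot=1 $usb $npci $ge"
    done
  done
done
```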

     

    Did you go through the various Optiplex 745 threads that were created last year?

  16. On a side note, would it be possible for EDP to include some fix for the hard disk ticking noise? OS X's silly way of enforcing maximum power savings causes the heads to park every 10-20 seconds, making this ticking noise. I use hdapm for it myself, but I don't know if they allow redistribution or if you guys even want such a thing in EDP. It could go in the fixes menu? ;)

    ??? I mustn't have the same system...  :blink:

  17. Oh, you really chose complication... Forget any decent graphics under ML on that model: there is no graphics support, there are no kexts at all, there is no acceleration, nada, zilch.

     

    No, you can't add an nVidia GPU, you can only buy/get a D630 fitted with one at factory.

    No, there is no EDP support for ML on that model, hence why you can't run EDP on it under ML.

     

    Browsing is slow, video unwatchable, opening/minimising/closing windows is slow, there are video artifacts most of the time (minor and sometimes major), games are unplayable, etc. Graphics painfully run off the CPU and, as such, affect overall system performance. To me, it's just not worth running unless you like saying you have ML on the system without actually using it; I don't see the point, but maybe that's just me... Again, I can only recommend sticking to Lion.

     

    I've uploaded the complete necessary Extra folder. Use it as indicated if you're a non-believer, but please, no more requests for ML on that D630 X3100 system. It's not supported and does not run properly, plain and simple.
