WAMP Error - Fatal error: Unknown: Failed opening required 'C:/….' (include_path='.;c:\php\includes') in Unknown on line 0

I've been searching around for hours now and can't seem to find any solutions.
I've been using WAMP Server on Windows for ages now, but suddenly last night I started getting these errors:

Warning: Unknown: failed to open stream: No such file or directory in Unknown on line 0.
Fatal error: Unknown: Failed opening required 'C:/Users/User/www/OMG/index.php' (include_path='.;c:\php\includes') in Unknown on line 0

Now, to test this out, I created an index.php file containing only the following:

<?php echo "Hello"; ?>

I still receive the errors. It doesn't matter whether I use localhost/OMG or a virtual-host address such as om.g.
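For comparison, this is roughly what a WAMP virtual-host definition looks like; the paths and names below are illustrative assumptions, not taken from the question. A mismatch between DocumentRoot and the directory that actually contains index.php (for example, after the folder was moved or renamed) produces exactly this pair of "Unknown on line 0" errors:

```apache
# Hypothetical httpd-vhosts.conf entry; all paths are assumptions.
<VirtualHost *:80>
    ServerName om.g
    # This must point at the directory that really contains index.php.
    # If it points somewhere stale, Apache reports
    # "Failed opening required ..." in Unknown on line 0.
    DocumentRoot "C:/Users/User/www/OMG"
    <Directory "C:/Users/User/www/OMG">
        Require all granted
    </Directory>
</VirtualHost>
```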

I've been pulling my hair out, and can't find anything that works from inside php.ini, etc.

Any help would be very much appreciated.


Source: stackoverflow-php

vagrant up as the apache user

I installed CentOS 6 on Vagrant + VirtualBox to use as a local PHP development environment.

When using Vagrant, commands execute as the 'vagrant' user; is it possible to make that 'apache' instead?

I created ssh.config in the directory where the Vagrantfile exists and tried to set the user to 'apache', but it did not work.

<?php echo exec("whoami"); // prints 'vagrant' ?>

The reason for this is that the executing user in the production environment is 'apache', so I want the local development environment to match production.
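One hedged observation: if `exec("whoami")` is run from the CLI inside `vagrant ssh`, it will always report 'vagrant'; only requests actually served by httpd run as 'apache'. A sketch of a Vagrantfile provision step (the box name, package names, and sed edits are assumptions for a CentOS 6 guest) that installs httpd so the script can be exercised under the apache user:

```ruby
# Sketch only; box name and packages are assumptions, not from the question.
Vagrant.configure("2") do |config|
  config.vm.box = "centos/6"
  # config.ssh.username cannot simply be switched to 'apache' on an
  # existing box; instead let httpd itself run as 'apache' (its default)
  # and test the PHP script through the web server, not the CLI.
  config.vm.provision "shell", inline: <<-SHELL
    yum install -y httpd php
    # Ensure the worker user/group are 'apache' (the stock default):
    sed -i 's/^User .*/User apache/'   /etc/httpd/conf/httpd.conf
    sed -i 's/^Group .*/Group apache/' /etc/httpd/conf/httpd.conf
    service httpd restart
  SHELL
end
```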


Source: stackoverflow-php

Running php on Hortonworks sandbox

I am trying to design the front end for my back-end services, which involve Hadoop and Hive. I was able to get XAMPP running on port 8085 of Hortonworks 2.4 (I had to stop the httpd service that was already running), and I successfully wrote PHP code that talked to the MySQL service. However, something astonishing I noticed right away is that firing hadoop, hive, or ambari commands gives me this:
-bash: hadoop: command not found
-bash: hive: command not found
-bash: ambari: command not found
I think there is some problem with the PATH variable. Can you help me identify the root cause? And is it possible to run XAMPP services from inside HDP 2.4 (which uses CentOS 6.9)?
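If the HDP client binaries are installed but simply not on the PATH, appending lines like the following to ~/.bash_profile would fix the "command not found" errors. The directories below are assumptions based on the usual HDP layout, so confirm them first with `ls /usr/hdp/current`:

```shell
# Assumed HDP 2.4 client locations; verify they exist before relying on them.
export PATH="$PATH:/usr/hdp/current/hadoop-client/bin:/usr/hdp/current/hive-client/bin"
```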


Source: stackoverflow-php

Installing PHP 5.6 on El Capitan: Syntax error with httpd.conf

While I am using MAMP Pro, I need to install Elasticsearch via OS X itself. However, I also need PHP 5.6, and El Capitan comes with 5.5 (I'm using OS X 10.11.6).

I followed a set of PHP installation instructions which resulted in an error:

httpd: Syntax error on line 119 of /private/etc/apache2/httpd.conf:
Cannot load modules/mod_unixd.so into server:
dlopen(/usr/modules/mod_unixd.so, 10): image not found

I did a bit of Googling, but failed to find anything definitive, and I’d prefer not to go making changes to httpd.conf until I have clue one.

I’m using Apache…

Server version: Apache/2.4.18 (Unix)
Server built:   Feb 20 2016 20:03:19
Server's Module Magic Number: 20120211:52
Server loaded:  APR 1.4.8, APR-UTIL 1.5.2
Compiled using: APR 1.4.8, APR-UTIL 1.5.2
Architecture:   64-bit
Server MPM:     prefork
  threaded:     no
    forked:     yes (variable process count)
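The dlopen path in the error (/usr/modules/mod_unixd.so) suggests httpd is resolving a relative modules/ path against the stock ServerRoot. A hedged sketch based on the default El Capitan layout; verify against your own httpd.conf before changing anything:

```apache
# Stock El Capitan httpd.conf uses ServerRoot "/usr" and loads modules
# relative to it, e.g.:
ServerRoot "/usr"
LoadModule unixd_module libexec/apache2/mod_unixd.so

# A line like the following (possibly introduced by the PHP install
# instructions) would make httpd look for /usr/modules/mod_unixd.so,
# which does not exist, hence the dlopen "image not found" error:
# LoadModule unixd_module modules/mod_unixd.so
```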


Source: stackoverflow-php

Phabricator URL doesn't load in browser, but it works with curl

We are unable to set up the Phabricator code-review tool because we cannot open the URL in a browser, but the same page loads if we execute "curl localhost".

Note: we haven't configured the DB yet. We configured httpd as described in the documentation (https://secure.phabricator.com/book/phabricator/article/configuration_guide/), and disabled SELinux and the firewall.
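Since `curl localhost` succeeds on the server itself, httpd is serving; a browser on another machine additionally needs the ServerName from the vhost (phabricator.example.com here) to resolve to the server. A hedged client-side /etc/hosts entry (the IP address is an assumption; use the server's real one):

```
192.168.1.10   phabricator.example.com
```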

[root@localhost conf]# diff httpd.conf.orig httpd.conf
55a56,57
> LoadModule php5_module modules/libphp5.so
> LoadModule rewrite_module modules/mod_rewrite.so
57a60
>
79a83,95
> <VirtualHost *>
>   # Change this to the domain which points to your host.
>   ServerName phabricator.example.com
>
>   # Change this to the path where you put 'phabricator' when you checked it
>   # out from GitHub when following the Installation Guide.
>   #
>   # Make sure you include "/webroot" at the end!
>   DocumentRoot /usr/local/bin/phabricator/webroot
>   RewriteEngine on
>   RewriteRule ^(.*)$          /index.php?__path__=$1  [B,L,QSA]
> </VirtualHost>
>
119c135
< DocumentRoot "/var/www/html"
---
> #DocumentRoot "/var/www/html"
124,127c140,147
< <Directory "/var/www">
<     AllowOverride None
<     # Allow open access:
<     Require all granted
---
> #<Directory "/var/www">
> #    AllowOverride None
> #    # Allow open access:
> #    Require all granted
> #</Directory>
>
> <Directory "/usr/local/bin/phabricator/webroot">
>   Require all granted

curl localhost - tail of the output:

<br />
Unable to establish a connection to any database host (while trying &quot;phabricator_config&quot;). All masters and replicas are completely unreachable.<br />
<br />
Make sure Phabricator and MySQL are correctly configured.</div>

The current Phabricator configuration has these 4 value(s):

mysql.host  "localhost"
mysql.port  null
mysql.user  "root"
mysql.pass  hidden

To update these 4 value(s), run these command(s) from the command line:

phabricator/ $ ./bin/config set mysql.host value
phabricator/ $ ./bin/config set mysql.port value
phabricator/ $ ./bin/config set mysql.user value
phabricator/ $ ./bin/config set mysql.pass value
To continue, resolve this problem and reload the page.
Host: localhost.localdomain

</body>

Source: stackoverflow-php

.htaccess mod_rewrite [P] for https

I have the following code in my .htaccess:

<IfModule mod_rewrite.c>
#RewriteCond %{HTTP_HOST} ^www.example.com
RewriteCond %{REQUEST_URI} !^/social/retrievePage.php
RewriteRule ^(.*)$ http://www.example.com/social/retrievePage.php?page=%{REQUEST_URI} [P]
</IfModule>

Basically it will remap any page to https://www.example.com/social/retrievePage.php and pass REQUEST_URI as page GET parameter.

It works well; however, once I set up an SSL certificate for my domain it no longer works as expected: instead of remapping, it makes a 301 redirect.

I tried fixing the issue by changing http to https in the RewriteRule, but got a 500 Internal Server Error.

What could be the issue?
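For what it's worth, proxying ([P]) to an https:// target requires mod_proxy, mod_proxy_http, and SSLProxyEngine; without SSLProxyEngine, an https proxy target typically fails with a 500. A hedged sketch (the domain is from the question; note that SSLProxyEngine cannot be set in .htaccess, only in the vhost/server config):

```apache
# In the vhost/server config of the proxying site, not in .htaccess:
SSLProxyEngine on

<IfModule mod_rewrite.c>
    RewriteEngine on
    RewriteCond %{REQUEST_URI} !^/social/retrievePage.php
    RewriteRule ^(.*)$ https://www.example.com/social/retrievePage.php?page=%{REQUEST_URI} [P]
</IfModule>
```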


Source: stackoverflow-php

Rewrite URL: remove part of URL

I need to remove part of a url.

http://whatever.net/api/users/1

I want to redirect this request, without api in the path, to a file named api.php located in the same directory.

I have following directory structure

root/api/{here we are}

So my api.php file should receive only users/1 in this case.

I have tried the following configuration, but it doesn't work (Not Found):

<IfModule mod_rewrite.c>
RewriteEngine on
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^api/(.*)$ api.php/$1 [L]
</IfModule>

Please help me get this working.
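One possible cause, assuming the .htaccess shown lives inside root/api/ (as the directory structure suggests): in per-directory context mod_rewrite strips the directory prefix before matching, so the pattern ^api/(.*)$ never matches there. A hedged sketch for an .htaccess placed in root/api/:

```apache
# .htaccess inside root/api/ ; the "api/" prefix is already stripped
# by per-directory rewriting, so match the remainder directly and hand
# it to api.php as PATH_INFO.
<IfModule mod_rewrite.c>
RewriteEngine on
RewriteBase /api/
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ api.php/$1 [L]
</IfModule>
```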


Source: stackoverflow-php

Transferring mod_rewrite rules from .htaccess to nginx

I am trying to convert some already-working rewrite rules from .htaccess to nginx.
My application has two modes: a backend (all calls starting with /admin/*) and a frontend (the rest of the calls). Backend requests get routed to admin.php, while frontend ones get routed to index.php.

This works great in Apache, but in nginx I can only get the frontend routing to work. The /admin/ requests do reach the admin.php file, but the PHP file is downloaded instead of being executed. I've already used http://winginx.com to convert my .htaccess rules to nginx, but I still can't get it to work for /admin.

Can an nginx pro help me out with the proper config to do this?

This is my working .htaccess config:

<IfModule mod_rewrite.c>
#RewriteBase /

# Google sitemap.xml configuration
RewriteRule sitemap\.xml$ /index.php?_extension=Frontend&_controller=Sitemap&action=googleSitemap [L,R=301]

RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.+)\.(\d+)\.(php|js|css|png|jpg|gif|gzip)$ $1.$3 [L]

# admin routes
RewriteRule ^/admin/(.*)$ admin.php?%{QUERY_STRING} [L]

RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-l

# frontend routes
RewriteRule .* index.php [L]

</IfModule>

and this is the nginx configuration that I tried so far…

server {
    listen       80;
    server_name  mydomain.local;
    root   /var/www/project;
    index  index.php index.html index.htm;
    access_log  /var/log/nginx/default-access.log  main;
    error_log   /var/log/nginx/default-error.log;

    error_page   500 502 503 504  /50x.html;

    location = /50x.html {
        root   /var/www/default;
    }

    location / {
        rewrite sitemap.xml$ /index.php?_extension=Frontend&_controller=Sitemap&action=googleSitemap redirect;
        try_files $uri $uri/ @rewrite;
    }
    location @rewrite {
        rewrite ^(.*)$ /index.php;
    }
    location /admin {
        rewrite ^/admin/(.*)$ /admin.php?$query_string break;
    }

    location ~ \.php {
        include                  fastcgi_params;
        fastcgi_keep_conn on;
        fastcgi_index            index.php;
        fastcgi_split_path_info  ^(.+\.php)(/.+)$;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_intercept_errors on;
        fastcgi_pass unix:/var/run/php-fpm.sock;
    }
}

With nginx on, doing a curl -s -D - 'http://mydomain.local/frontend/call' | head -n 20 returns the Content-Type as text/html, while a call to curl -s -D - 'http://cms.dev/admin/whatever' | head -n 20 returns the application/octet-stream content type, which triggers the download.
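One hedged explanation for the octet-stream response: `rewrite ... break;` inside `location /admin` stops rewrite processing and serves the rewritten URI from that same location, so /admin.php never goes through another round of location matching and the PHP fastcgi location is never reached; the file is served verbatim instead. Replacing `break` with `last` makes nginx re-run location matching on the rewritten URI:

```nginx
location /admin {
    # "last" restarts location matching, so the rewritten /admin.php
    # is picked up by the PHP fastcgi location instead of being
    # served as a static file.
    rewrite ^/admin/(.*)$ /admin.php?$query_string last;
}
```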


Source: stackoverflow-php