awk -F ' ' '{ a[$5]++ } END { for (b in a) { print b } }' dm_documents_resource.sql
Get the unique values of column 5, split on the space character (a cut + uniq style job).
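A minimal, runnable sketch of the same idea (the sample data is made up): the awk one-liner counts column 5 and prints the distinct keys; the cut | sort -u pipeline is the equivalent.

```shell
# Build a small space-delimited sample file (hypothetical data).
printf 'a 1 x y k1\nb 2 x y k2\nc 3 x y k1\n' > sample.txt

# awk version: count occurrences of column 5, print the distinct keys.
awk '{ a[$5]++ } END { for (b in a) print b }' sample.txt

# Equivalent cut | sort -u pipeline: extract column 5, then de-duplicate.
cut -d' ' -f5 sample.txt | sort -u
```

Note awk's `for (b in a)` iterates in unspecified order; pipe through sort if order matters.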
pfccdlfdtte pcaccplircdt dklpcfrp?qeiq lhpqlipqeodf gpwafopwprti izxndkiqpkii krirrifcapnc dxkdciqcafmd vkfpcadf.
[ Code shift 1, 2 - n word
- 2 shift add position
"Sometimes you can have the smallest role in the smallest production and still have a big impact."
find . -name .svn -exec rm -rf {} \;
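A variant of the same cleanup, assuming GNU/BSD find (the `demo` directory layout below is made up for illustration): adding `-type d -prune` stops find from descending into a directory it is about to delete, which avoids "No such file or directory" noise.

```shell
# Hypothetical layout standing in for a checkout littered with .svn dirs.
mkdir -p demo/a/.svn demo/b/.svn

# Remove all .svn directories; -prune keeps find from walking into
# each matched directory after it has been handed to rm.
find demo -name .svn -type d -prune -exec rm -rf {} +
```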
-------------------------
Using sort to drop duplicate rows (rows sharing the same value in one column):
$ sort -t, -u -b -k1n text
542,8,1,418,1
542,9,1,418,1
199,7,1,419,10
301,34,1,689070,1
It is not sorting based on the first column.
Approach 2
$ sort -t, -u -b -k1n,1n text
199,7,1,419,10
301,34,1,689070,1
542,8,1,418,1
•••••••••••••••••••••••••••
Sort the needed column to the front, then filter: when a row repeats the leading value, skip it.
Your best bet would be to either sort all columns:
sort -t, -nk1,1 -nk2,2 -nk3,3 -nk4,4 -nk5,5 -u text
or
use awk to filter duplicate lines and pipe it to sort.
awk '!_[$0]++' text | sort -t, -nk1,1
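The `!_[$0]++` idiom prints a line only the first time it is seen: the array entry is 0 (false) on first sight, so the negation is true and the line is printed; the post-increment then makes the entry truthy for every repeat. A small runnable demo (sample data invented):

```shell
# Demo input with a duplicated line (hypothetical CSV).
printf '542,8,1\n542,8,1\n199,7,1\n' > dups.txt

# Keep only the first occurrence of each line, then sort numerically
# on the first comma-separated field.
awk '!_[$0]++' dups.txt | sort -t, -nk1,1
```

Unlike `sort -u`, this preserves whole-line uniqueness without requiring the keys to be sorted first.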
"How did Biot arrive at the partial differential equation? [the heat conduction equation] … Perhaps Laplace gave Biot the equation and left him to sink or swim for a few years in trying to derive it. That would have been merely an instance of the way great mathematicians since the very beginnings of mathematical research have effortlessly maintained their superiority over ordinary mortals." - Clifford Truesdell
Clifford Truesdell: http://en.wikipedia.org/wiki/Clifford_Truesdell
Heat conduction equation: http://en.wikipedia.org/wiki/Heat_equation
|
This command is wrong:
for char in {a..j}; do mv netreal.dump.part_b$char netreal.dump.part_b$char_z; done
but this one is OK:
for char in {a..z}; do mv b$char/netreal.dump.part_b$char .; done
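The first loop fails because the shell parses `$char_z` as a single variable named `char_z` (underscores are valid in names), which is unset, so the `_z` suffix silently disappears from the target filename. Braces delimit the variable name:

```shell
char=a
# Without braces: the shell looks up the (unset) variable "char_z",
# so the suffix is lost and only the literal prefix remains.
echo "netreal.dump.part_b$char_z"
# With braces: the name ends at "char", "_z" stays literal.
echo "netreal.dump.part_b${char}_z"
```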
cat trick:
cat netreal.dump.part_be_P_ab |sed -n '833620,834097p' > list_correctjob_logs.sql
Parameterize the sed command above to turn it into a handy bash script.
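One way to parameterize that line-range extraction (a sketch; `extract_lines` and the sample file are made-up names):

```shell
# extract_lines FILE START END — print lines START..END of FILE,
# the parameterized form of: sed -n '833620,834097p' FILE
extract_lines() {
  file=$1 start=$2 end=$3
  sed -n "${start},${end}p" "$file"
}

# Example: write 1..5 to a file, then extract lines 2..4.
seq 1 5 > nums.txt
extract_lines nums.txt 2 4
```

Usage against the dump from above would look like `extract_lines netreal.dump.part_be_P_ab 833620 834097 > list_correctjob_logs.sql`, without the useless `cat`.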
|
SHA-1 encryption with a salt does not equal the GCC encryption output.
Either it is an abnormal encrypt function,
or user, m_js, m_com ... differ.
Postgres users
su postgres
< dump :) :| :(
|
CREATE DATABASE IF NOT EXISTS foo DEFAULT CHARACTER SET = 'utf8' DEFAULT COLLATE 'utf8_general_ci'
or
ALTER DATABASE foo DEFAULT CHARACTER SET = 'utf8' DEFAULT COLLATE 'utf8_general_ci'
|
su postgres
tables:
for tbl in `psql -qAt -c "select tablename from pg_tables where schemaname = 'public';" YOUR_DB` ; do psql -c "alter table $tbl owner to NEW_OWNER" YOUR_DB ; done
Sequences:
for tbl in `psql -qAt -c "select sequence_name from information_schema.sequences where sequence_schema = 'public';" YOUR_DB` ; do psql -c "alter table $tbl owner to NEW_OWNER" YOUR_DB ; done
Views:
for tbl in `psql -qAt -c "select table_name from information_schema.views where table_schema = 'public';" YOUR_DB` ; do psql -c "alter table $tbl owner to NEW_OWNER" YOUR_DB ; done
|
rm -rf netreal.dump.part_a*
rm -rf netreal.dump.part_b[k-z]
vim /etc/init.d/postgresql
vim /etc/init.d/postgresql5433
vim /etc/init.d/postgresql
/etc/init.d/postgresql stop
psql -u postgres -l
psql -U postgres -l
rpm -qa | grep postgres
netstat -anp | less
mv data data.bak
ln -s /home/data
ls -l
cp -av data.bak /home/data
[10-May-2014 06:32:51 Europe/Berlin] PHP Fatal error: Maximum execution time of 300 seconds exceeded in /Users/dungnv/Documents/SymApp/netreal-2.0/app/libraries/phpass-0.1/PasswordHash.php on line 210
DELETE FROM dups a USING (
  SELECT MIN(ctid) AS ctid, key FROM dups GROUP BY key HAVING COUNT(*) > 1
) b WHERE a.key = b.key AND a.ctid <> b.ctid;
|
Cool psql duplicate removal:
DELETE FROM fax_api_results far USING (
  SELECT MIN(ctid) AS ctid, id FROM fax_api_results GROUP BY id HAVING COUNT(*) > 1
) far2 WHERE far.id = far2.id AND far.ctid <> far2.ctid;
SELECT 1
WHERE NULL NOT LIKE '%test%'
returns no rows: any LIKE comparison involving NULL yields NULL, which is not true.
|
Netreal bugs that may occur:
1. DB config mysql -> mysqli in API
2. JS hard-coded in member JS
3. Fax, list and API email code ... still not good.
4. DB postgres last edit
5. Convert to mysql
6. Strange dump data
#!/bin/bash
echo "Let's get started."
echo Installing generators
gsed -i 's/"4.1.*"/"4.1.*",\n\t\t"way\/generators": "1.1"/' composer.json
gsed -i "s/WorkbenchServiceProvider',/WorkbenchServiceProvider',\n\t\t'Way\\\Generators\\\GeneratorsServiceProvider'/" app/config/app.php
echo Updating composer
composer update
echo Creating MySQL database
mysql -uroot -p -e "CREATE DATABASE larademos"
echo Updating database configuration file
gsed -i "s/'database' => 'database'/'database' => 'larademos'/" app/config/database.php
gsed -i "s/'password' => ''/'password' => '1234'/" app/config/database.php
echo -n "Do you need a users table? [yes|no] "
read -e ANSWER
if [ "$ANSWER" = 'yes' ]
then
    echo Creating users table migration
    php artisan generate:migration create_users_table --fields="username:string:unique, email:string:unique, password:string"
    echo Migrating the database
    php artisan migrate
fi
FFmpeg, ImageMagick: composite a face onto video
for loop to handle big data files
wget crawler
SELECT TABLE_NAME, TABLE_ROWS FROM `information_schema`.`tables` WHERE `table_schema` = 'redmine_development';
http://www.grymoire.com/Unix/Sed.html
http://www.thegeekstuff.com (tons of tutorials)
http://stackoverflow.com/questions/1251999/how-can-i-replace-a-newline-n-using-sed
------------------------------------------------------------
awk '{ if (/^[^0-9]/) { next } # Skip lines which do not hold key values
if (FNR==NR) { main[$0]=1 } # Process keys from file "mainfile"
else if (main[$0]==0) { keys[$0]=1 } # Process keys from file "keys"
} END { for(key in keys) print key }' \
"mainfile" "keys" >"keys.not-in-main"
This answer is based on the awk answer posted by potong. It is twice as fast as the comm method (on my system) for the same 6 million lines in main-file and 10 thousand keys... (now updated to use FNR, NR)
Although awk is faster than your current system, and will give you and your computer(s) some breathing space, be aware that when data processing is as intense as you've described, you will get the best overall results by switching to a dedicated database, e.g. SQLite, MySQL...
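A runnable sketch of the FNR==NR two-file pattern used above (file contents invented): while awk reads the first file, FNR equals NR, so those keys are loaded into `main`; lines of the second file are printed only if absent from it.

```shell
# mainfile holds known keys; keys holds candidates to check.
printf '100\n200\n300\n' > mainfile
printf '200\n400\n' > keys

# FNR==NR is true only while reading the first file: store its keys.
# For the second file, print lines not present in main[].
awk 'FNR==NR { main[$0]=1; next }
     !($0 in main)' mainfile keys
```

`$0 in main` is used instead of `main[$0]==0` because the latter silently creates an entry for every probed key; both work here, but `in` does not mutate the array.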
cut -d ' ' --complement -f -2,4-6,10-12 file.txt
EDIT:
From additional information from the comments:
< file.txt awk '{ print $3, $7, $8, $9 }' | column -t
Results:
KL1 -7.299 41.933 48.192
G 39.541 25.078 -2.722
MySQL: find whether a column exists in a table.
SELECT *
FROM information_schema.COLUMNS
WHERE
    TABLE_SCHEMA = 'db_name'
    AND TABLE_NAME = 'table_name'
    AND COLUMN_NAME = 'column_name';