Yesterday I suddenly found that Windows 7 could no longer hibernate.
The symptom:
after clicking Hibernate, the screen goes black for a moment and then drops straight back to the login screen -。-
I spent ages searching in Chinese for things like "windows 7 无法休眠" (Windows 7 can't hibernate);
most answers said to update drivers, tweak the power configuration, and so on...
None of it worked...
Frustrated, I tried searching in English instead: windows 7 cannot hibernate
(休眠 is hibernate, 睡眠 is sleep, haha~)
Then my eyes lit up on a thread titled [SOLVED] GRUB + Windows 7 = Can't put windows to sleep/hibernate.
That "SOLVED" was a real lifesaver.
Link: http://ubuntuforums.org/showthread.php?t=1341694
I basically followed the steps there and everything worked.
If your machine also dual-boots Ubuntu or the like, this is almost certainly your problem.
If you only run Windows... you'll have to look elsewhere -。-
Here is a rough translation of the thread:
Symptom:
You installed Ubuntu after Windows 7 as a dual boot (the install order matters mainly in that it determines whether GRUB boots the system), so GRUB ended up in the MBR. As a result, Windows can no longer enter sleep or hibernate (the screen blinks and comes right back). Restoring the MBR to its original state with the Windows 7 DVD fixes this, but then GRUB stops working...
Actual problem:
For sleep/hibernate to work, the first Windows partition must be flagged as boot, even with GRUB installed.
Solution:
In Ubuntu, use gparted (install it with sudo apt-get install gparted, run it with sudo gparted). On your boot drive (usually /dev/sda), make sure the first Windows partition carries the boot flag. Then reboot and you're done.
P.S. The replies in that thread are also worth a read~
ThinkPad users hit this problem a lot, because ThinkPads ship with a default SYSTEM_DRV partition before the C: drive, and that partition is usually the one flagged as boot. The fix is simply to put the boot flag on the Windows system partition instead.
The moral of the story: knowing how to search in English really matters...
Wednesday, March 28, 2012
Monday, March 26, 2012
Setting parameters for a click event in jQuery
(partially adapted from Baidu Zhidao)
[Question]
In plain HTML, an inline onclick can pass a parameter to a JS function:
<a href="#" onClick="showFile('view');">aaaaa</a>
<script>function showFile(fun){}</script>
But with jQuery's click event, how do you pass that parameter?
<a href="#" id="fun">aaaaa</a>
$("#fun").click(function () { });
[Answer]
1. jQuery's click event cannot take the parameter directly.
If you write $('#fun').click(choose("val"));
choose runs immediately when that statement executes, and click() is handed its return value rather than the function.
Bind the function reference instead: $('#fun').click(choose);
function choose() {
//...
}
2. You can treat
$('#fun').click(function () {
});
as declaring the handler,
equivalent to the inline onclick declaration;
inside the anonymous function, call your own function with whatever arguments you need:
$('#fun').click(function () {
    showFile('view'); // the call you want, with its argument
});
3. You can also store parameters in the tag's own attributes and read them back with attr():
<a id="fun" testvalue='abc' href="#" onClick="showFile('view');">aaaaa</a>
$('#fun').click(function () {
alert($(this).attr('testvalue'));
alert($(this).text());
alert($(this).attr('href'));
//......
});
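To see why point 1 behaves the way it does, here is a minimal plain-JavaScript sketch, runnable outside the browser. `bind` and `fire` are hypothetical stand-ins for jQuery's `.click()` binding and the click event itself; they are not jQuery APIs:

```javascript
// Minimal stand-in for an event registry: bind() stores a handler,
// fire() later invokes every stored handler (like a click happening).
const handlers = [];
function bind(handler) { handlers.push(handler); }
function fire() {
  handlers.forEach(function (h) {
    if (typeof h === "function") h(); // skip non-function entries
  });
}

const calls = [];
function choose(val) { calls.push(val); }

// Wrong: choose("val") executes right now, during binding, and
// bind() receives its return value (undefined), not a function.
bind(choose("val"));
console.log(calls); // [ 'val' ] -- already called before any "click"

// Right: the anonymous wrapper itself is stored; choose() only
// runs when the event actually fires.
bind(function () { choose("view"); });
fire();
console.log(calls); // [ 'val', 'view' ]
```

jQuery itself also offers an event-data form for this: `$('#fun').click({ file: 'view' }, function (e) { showFile(e.data.file); });` attaches the data at binding time and reads it back inside the handler.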
Getting an input element's value in jQuery
In plain JS, you would normally read an input's value like this:
<input id="p_folder" />
var p = document.getElementById("p_folder");
var pV = p.value;
But in jQuery, if you write
var p = $('#p_folder');
var pV = p.value;
you will not get the element's value,
because $() returns a jQuery object, not a DOM element.
value is a property of the DOM element; its jQuery counterpart is val():
val(): gets the current value of the first matched element
val("val"): sets the value of every matched element to val
So the code above should be written as
var p = $('#p_folder');
var pV = p.val();
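The get-first / set-all semantics of val() can be sketched with a toy wrapper. This is a hypothetical model of the jQuery object, wrapping plain objects instead of real DOM elements, not jQuery's actual implementation:

```javascript
// Toy model of a jQuery object: it wraps a LIST of matched
// "elements" (plain objects here instead of DOM nodes), which is
// why reading .value on the wrapper itself yields nothing.
function Wrapper(elements) { this.elements = elements; }

// val() with no argument: value of the FIRST matched element.
// val(v): set the value of EVERY matched element, return this.
Wrapper.prototype.val = function (v) {
  if (v === undefined) return this.elements[0].value;
  this.elements.forEach(function (el) { el.value = v; });
  return this;
};

const inputs = [{ value: "a" }, { value: "b" }];
const p = new Wrapper(inputs);

console.log(p.value);  // undefined -- the wrapper is not a DOM element
console.log(p.val());  // 'a' -- first matched element's value
p.val("c");            // sets every matched element
console.log(inputs[1].value); // 'c'
```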
Friday, March 2, 2012
Summary of Compressed Video Sensing
[1] proposed to use cs on stream video by sample several frame together or independently.... But it didn't consider the interframe redundancy.
[2] was focus on increasing the resolution of digital video, thus little work was done for video coding/compression.
[3] [4] proposed compressed video sensing in 2008.
[3] used a hybrid way to compress video. The main contribution I think was only the scheme it proposed: transmit both conventionally encoding(low resolution) and cs encoding(high resolution) video stream, recon. on demand (if coarse-scale -> conventionally decoding, if fine-scale, cs decoding).
Compared to [3], I think [4] is much important for CVS. The way it employed is classifying the blocks of a frame to dense and sparse via a cs testing. Dense blocks use conventional encoding, while sparse blocks use cs. The cs testing for a block of frame is another contribution should be noticed.
In 2009, most work focused on distributed CVS based on the notion of Distributed Video Coding (DVC). [5] uses reconstructed key frames to find a sparse basis for the CS frames, and proposes L1, SKIP, and SINGLE modes for CS frames; its codec is quite similar to pixel-domain DVC. [6] also uses reconstructed key frames to generate side information, but its side information is a prediction rather than a sparse basis. Furthermore, [6] uses both frame-based and block-based encoding for CS frames, which is quite novel, although I think it improves performance at the cost of some redundancy. Unlike [5][6], [7] uses CS for both key frames and non-key frames, and proposes a modified GPSR for DCVS. It also contains a relatively complete review of techniques such as CS, DVC, and DCS, which should be quite useful for beginners in this area.
[8] proposed a very interesting multiscale framework. It employs the LIMAT [11] framework to exploit motion information and remove temporal redundancy, in an iterative multiscale loop: reconstruct successively finer-resolution approximations of each frame using motion vectors estimated at coarser scales, and alternately use these approximations to re-estimate the motion. The multiscale framework essentially exploits the coarse-to-fine structure of the wavelet transform.
[10], published in 2011, designs a cross-layer system for video transmission using compressed sensing.
The cross-layer system jointly controls the video encoding rate, transmission rate, and channel coding rate; it is useful for researchers focused on the network design of a compressive sensing application.
[9] is not about CVS, but I think it is very important for understanding current video compression techniques. It introduces techniques such as H.26x and MPEG, and is a very good introduction and review.
One more thing worth mentioning: the distributed compressed video coding in [5] and [6] both relies on the notion that the sparsest representation of a block in a frame is a combination of its neighboring blocks.
[1] Compressive imaging for video representation and coding
[2] Compressive coded aperture video reconstruction
[3] Compressed video sensing
[4] Compressive Video Sampling
[5] Distributed video coding using compressive sampling
[6] Distributed compressed video sensing
[7] Distributed compressive video sensing
[8] A multiscale framework for compressive sensing of video
[9] Video Compression Techniques: An Overview
[10] Compressed-Sensing-Enabled Video Streaming for Wireless Multimedia Sensor Networks
[11] Lifting-based invertible motion adaptive transform framework for highly scalable video compression
Review of the size of Measurement Matrix in Compressed Sensing
* This article is only a review for my personal use. It may contain mistakes; please do not rely on it.
* If you notice any mistakes, please let me know. Thanks a lot.
In the following, $M$ is the number of measurements needed, $N$ is the signal length, $K$ is the sparsity, $\Phi$ is the measurement matrix, $\Psi$ is the sparse basis, and $C$ is a constant.
$Y = AX = \Phi\Psi X$
The requirement on $M$:
[1] Sparsity and Incoherence in compressive sampling
$M \geq C \cdot \mu^2(\Phi, \Psi) \cdot K \log N$
[2] An introduction to Compressive Sampling
Form $A$ obeying RIP i)-iv):
$M = O(K \log(N/K))$
$M \geq C \cdot K \log(N/K)$
i)-iii) see
[3] A simple proof of the restricted isometry property for random matrices
iv) see
[4] Uniform uncertainty principles for Bernoulli and sub-gaussian ensembles
Form $A$ by first finding pairs of incoherent orthobases $\Phi, \Psi$, and then extracting $M$ coordinates uniformly at random using $R$: $A = R\Phi\Psi$.
$M \geq C \cdot (\log N)^4$
$M \geq C \cdot (\log N)^5$ for a lower probability of failure
see [6] and [7] On sparse reconstruction from Fourier and Gaussian measurements
[6] Near-optimal signal recovery from random projections and universal encoding strategies
[8] Compressed Sensing, D. L. Donoho
[9] Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information
[10] Neighborliness of randomly projected simplices in high dimensions
[11] High-dimensional centrally symmetric polytopes with neighborliness proportional to dimension
$M = O(K \log N)$
$M \geq C \cdot K \cdot \log N$
[12] Compressive Sensing, R. G. Baraniuk
$M \geq C \cdot K \log(N/K)$
It cites results from [8] and [9]. Are they the same?
To summarize, there are four expressions:
1)
$M = O(K \log N)$
$M \geq C \cdot K \cdot \log N$
2)
$M \geq C \cdot K \log(N/K)$
3)
$M \geq C \cdot (\log N)^4$
$M \geq C \cdot (\log N)^5$
4)
$M \geq C \cdot \mu^2(\Phi, \Psi) \cdot K \log N$
1) and 2) are quite similar to each other; 1) covers the noisy case and 2) the noiseless case.
4) is quite similar to 1), except for the factor $\mu^2(\Phi, \Psi)$, which measures the incoherence between the two matrices.
Thursday, March 1, 2012
Some Ideas
Jotting down some recent ideas.
The main project is still the paper wiki; I hope to get the platform up before the break.
I also want to build a site like Wuzhi (吾志), but just for recording/sharing dreams. The idea comes from the dream-buying in xxxHOLiC. This one needs more thought, e.g. should it be a standalone website or just an app on an existing SNS? As an app I'm afraid it would spoil the idea's pure goal, but as a website I doubt the market is very big, and it needs database support, etc.
Still, it doesn't seem hard to build; it's just a matter of finding an existing framework.
I also want to build an idea-sharing site focused on the little ideas that come up in daily life: find people with similar ideas to develop an app together, or pass the suggestions on to service providers, etc.
For this kind of site, though, visual design seems really important.
Another idea is a WikiCFP-like site for publishing/collecting information on all kinds of competitions, making it easy for university students to find and join them and helping companies promote them. This came from noticing how many competitions struggle to raise their visibility. It doesn't seem hard either; one could almost copy WikiCFP outright, though a polished version would still need proper UI design.
Then there are some things that would be harder to realize, heh.
For example, a cooking/dishwashing robot: feed it a recipe program and it cooks for you ~。~ It could also cook on a schedule, or be controlled remotely...
* This could even grow an app platform: an app that recommends recipes, or plans meals from what's already at home (for people who buy several days of groceries at once), or reminds you when ingredients are about to expire...
There's also a smart whiteboard (pen): work through derivations, present, and take notes on a glass board/whiteboard, then select a region with the pen and send it to a terminal (computer, printer...).
But none of this is my field, and it can't be my current research, so it can only be spare-time tinkering.
If I could make a living without studying or working, I could probably focus on this.
I guess I'll just keep doing the PhD, heh.